
5 - Algorithmic Law: Law Production by Data or Data Production by Law?

from Part I - Algorithms, Freedom, and Fundamental Rights

Published online by Cambridge University Press: 01 November 2021

Hans-W. Micklitz, European University Institute, Florence
Oreste Pollicino, Bocconi University
Amnon Reichman, University of California, Berkeley
Andrea Simoncini, University of Florence
Giovanni Sartor, European University Institute, Florence
Giovanni De Gregorio, University of Oxford

Summary

Online human interactions are a continuous matching of data that affects both our physical and virtual lives. How data are coupled and aggregated is the result of what algorithms constantly do through a sequence of computational steps that transform input into output. In particular, machine learning techniques are based on algorithms that identify patterns in datasets. The chapter explores how algorithmic rationality may be considered a new bureaucracy according to Weber’s conceptualization of legal rationality. It questions the idea that technical disintermediation can achieve the goal of algorithmic neutrality and objective decision-making. It argues that such rationality serves surveillance purposes in the broadest sense. Algorithmic surveillance reduces the complexity of reality by calculating the probability that certain facts will happen on the basis of repeated actions.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2021

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/cclicenses/).

5.1 Introduction

Online human interactions are a continuous matching of data that affects both our physical and virtual lives. How data are coupled and aggregated is the result of what algorithms constantly do through a sequence of computational steps that transform input into output. In particular, machine learning techniques are based on algorithms that identify patterns in datasets. This chapter explores how algorithmic rationality may fit into Weber’s conceptualization of legal rationality. It questions the idea that technical disintermediation can achieve the goal of algorithmic neutrality and objective decision-making.Footnote 1 It argues that such rationality serves surveillance purposes in the broadest sense. Algorithmic surveillance reduces the complexity of reality by calculating the probability that certain facts will happen on the basis of repeated actions. Algorithms shape human behaviour, codifying situations and facts, stigmatizing groups rather than individuals, and learning from the past: predictions may lead to static patterns that recall the idea of caste societies, in which the individual’s potential for change and development is far from preserved. The persuasive power of algorithms (so-called nudging) largely consists of small changes aimed at predicting social behaviours that are expected to be repeated over time. In the long run, this pressure builds a model of anti-social mutation in which actions are pre-oriented. Against such a backdrop, the role of law and legal culture is relevant for individual emancipation and social change, in order to frame a model of data production by law. This chapter is divided into four sections: the first describes commonalities and differences between legal bureaucracy and algorithms, the second examines the linkage between a data-driven model of law production and algorithmic rationality, the third presents the different perspective of the socio-legal approach to algorithmic regulation, and the fourth questions the idea of law production by data as a product of legal culture.

5.2 Bureaucratic Algorithms

‘On-life’ dimensions represent the threshold for a sustainable data-driven rationality.Footnote 2 As stated in the White Paper on AI, ‘today 80% of data processing and analysis that takes place in the cloud occurs in data centres and centralized computing facilities, and 20% in smart connected objects, such as cars, home appliances or manufacturing robots, and in computing facilities close to the user (“edge computing”)’. By means of an unceasing growth of categorizations and classifications, algorithms develop mechanisms of social control by connecting the dots. Our actions thus mostly depend on, or are somehow affected by, the usable form in which the algorithmic code is rendered. In order to enhance their rational capability of calculating every possible action, algorithms aim at reducing human discretion and at structuring behaviours and decisions in a manner similar to bureaucratic organizations. Algorithms act as normative systems that formalize certain patterns. As Max Weber pointed out, the modern capitalist enterprise is mainly based on calculation. For its existence, it requires a justice system and an administration whose operations can, at least in principle, be rationally calculated on the basis of general rules – in the same way in which the foreseeable performance of a machine is calculated.Footnote 3 This entails that, like bureaucracy, algorithms use impersonal laws requiring obedience that impede free, unpredictable choices.Footnote 4 Indeed, according to the Weberian bureaucratic ideal types, the separation between the administrative body and the material means of the bureaucratic enterprise is quintessential to the most perfect form of bureaucratic administration: the political expropriation in favour of specialized civil servants.Footnote 5 Nonetheless, the impersonality of legal rules does not in any case entail a lack of responsibility, by virtue of the principle of the division of labour and the hierarchical order on which modern bureaucracy is based:Footnote 6 civil servants’ responsibility is to obey impersonal rules, or to pretend they are impersonal, whereas exclusive and personal responsibility for his actions belongs to the political boss.Footnote 7 Bureaucracy is characterized by the objective fulfilment of duties, ‘regardless of the person’, based on foreseeable rules and independent of human considerations.Footnote 8

By contrast, the risk of algorithmic decision-making is that no human actor takes responsibility for the decision.Footnote 9 The supervision and attribution of specialized competences from the highest bureaucratic levels towards the lowest ones (Weber uses the example of ‘procurement’)Footnote 10 ensures that the exercise of authority complies with precise competences and technical qualifications.Footnote 11 Standardization, rationalization, and formalization are common to both bureaucratic organizations and algorithms. Bureaucratic administration can be considered economic insofar as it is fast, precise, continuous, specialized, and avoids possible conflicts.Footnote 12 Testing algorithms as legal rational means poses a double question: (1) whether, through artificial intelligence and isocratic forms of administration, the explainability of algorithmic processes improves institutional processes, and in what respect with regard to staff competence and individual participation; and (2) whether algorithms take on some of the role of processing institutional and policy complexity much more effectively than humans.Footnote 13

According to Aneesh, ‘bureaucracy represents an “efficient” ideal-typical apparatus characterized by an abstract regularity of the exercise of authority centred on formal rationality’.Footnote 14 In fact, algorithms ‘are trained to infer certain patterns based on a set of data. In such a way actions are determined in order to achieve a given goal’.Footnote 15 The socio-technical nature of public administration consists in the ability to share data: this is the enabler of artificial intelligence for rationalization. Like bureaucracy, algorithms appear compatible with three Weberian rationales: the Zweckverein (purpose union), as an ideal type of voluntary associated action; the Anstalt (institution), as an ideal type of institutions, rational systems achieved through coercive measures; and the Verband (social group), as an ideal type of common action aiming at an agreement for a common purpose.Footnote 16 Under the first rationale, algorithms are used to smoothly guide a predictable type of social behaviour through data extraction on an ‘induced’ and mostly accepted voluntary basis;Footnote 17 under the second, the induction of needs is achieved through forms of ‘nudging’, such as the customization of contractual forms and services based on profiling techniques and without meaningful mechanisms of consent; under the third, legitimacy is based on the social agreement about their utility in making services faster and cheaper (automation theory) or also better (augmentation system).Footnote 18

However, unlike bureaucracy, technology directly legitimizes action by presenting users with the bare option ‘can/cannot’. Legitimacy is embedded within the internal rationality of technology. As Pasquale observes, ‘authority is increasingly expressed algorithmically’.Footnote 19 Moreover, as with the rise of bureaucratic action, technologies have been expected to be controlled through the exercise of judicial review so as not to undermine civil liberties and equality. As a matter of fact, algorithmic systems are increasingly being used as part of the continuous process of Entzauberung der Welt (disenchantment of the world) – the achievement of rational goals through organizational measures – with potentially significant consequences for individuals, organizations, and societies as a whole.

There are essentially four rational models of machine learning relevant for law-making: Neural Networks, algorithms that learn from examples through neurons organized in layers; Tree Ensemble methods, which combine several learning algorithms to improve the predictive power of any single one of them; Support Vector Machines, which use a subset of the training data, called support vectors, to represent the decision boundary; and Deep Neural Networks, which can model complex non-linear relationships with multiple hidden layers.Footnote 20
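By way of illustration only, the following sketch instantiates these four model families with the scikit-learn library; the synthetic dataset and all hyperparameters are arbitrary placeholders, not part of the chapter’s argument.

```python
# A minimal, purely illustrative sketch: the four model families named
# above, instantiated with scikit-learn on a synthetic dataset.
# Hyperparameters and data are arbitrary placeholders, not recommendations.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    # (1) Neural network: neurons organized in one hidden layer
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    # (2) Tree ensemble: many trees combined to improve predictive power
    "tree ensemble": RandomForestClassifier(n_estimators=100, random_state=0),
    # (3) Support vector machine: boundary represented by support vectors
    "support vector machine": SVC(kernel="rbf"),
    # (4) Deep neural network: multiple hidden layers for non-linear relations
    "deep neural network": MLPClassifier(hidden_layer_sizes=(64, 32, 16),
                                         max_iter=1000),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))
```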

Opaqueness and automation are their main common features, consisting in the secrecy of the algorithmic code and the very limited human input.Footnote 21 This typical rationality is blind since, as Zuboff notes, algorithms inform operations through the interaction of these two aspects. Nonetheless, explainability and interpretability are also linked to the potential of algorithmic legal design as a rational means.Footnote 22 Rational algorithmic capability is linked to the most efficient use of data and the inferences based on them. However, the development of data-driven techniques in the algorithmic architecture determines a triangulation among market, law, and technology. To unleash the full potential of data, rational means deployed to create wider data accessibility and sharing for private and public actors are now being devised in many areas of our lives. It should be borne in mind, however, that the use of algorithms as a tool for increasing the efficiency of the public sector cannot be examined separately from the risk of algorithmic surveillance based on indiscriminate access to private-sector data.Footnote 23 This is because the entire chain of services depends upon more or less overarching access to private-sector data. Access to those data requires a strong interaction between public actors’ political power and private actors’ economic and technological capability. This dynamic is so pervasive that it dominates our entire daily life, from market strategy to economic supply. Furthermore, once the ‘sovereigns’ of the nation-states and their borders have been trumped, data flows re-articulate space in an endless way. The paradox of creating space without having a territory is one of the rationales of the new computational culture that is building promises for the future.
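As a concrete, hedged illustration of the explainability techniques surveyed in the literature cited above, one widely used approach is the global surrogate: an interpretable model is trained to mimic the opaque one, so that its rules approximate the black box’s behaviour. The sketch below is purely illustrative; the dataset, models, and parameters are assumptions, not the method of any author discussed here.

```python
# A minimal sketch of one explainability technique from the literature:
# a "global surrogate" - an interpretable decision tree trained to mimic
# an opaque model's predictions. Dataset and parameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train a shallow, human-readable tree on the black box's own outputs
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate))  # readable rules approximating the opaque model
# "Fidelity": how often the surrogate agrees with the black box
print("fidelity:", surrogate.score(X, black_box.predict(X)))
```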

5.3 Law Production by Data

Law production is increasingly subjected to a specialized rationality.Footnote 24 Quantitative knowledge feeds the aspiration to ‘rationality’ of the state bureaucracy, since it helps dress the exercise of public powers in an aura of technical neutrality and impersonality, apparently leaving no room for the discretion of individual power.Footnote 25 Behind the appearance of the Weberian bureaucratic principle sine ira et studio – which refers to the exclusion of affective, personal, non-calculable, and non-rational factors from the fulfilment of civil servants’ dutiesFootnote 26 – the use of classification and measurement techniques affecting human activities generates new forms of power that standardize behaviours in order to forecast the expectations, performances, and conduct of agents.Footnote 27 As Zuboff rightly highlights, ‘instrumentarian power reduces the human experience to measurable observable behaviour while remaining steadfastly indifferent to the meaning of that experience’.Footnote 28 However, even though the production of law through customized and tailored solutions can be a legitimate goal of computational law, it is not the whole story. The social context may change while the law is in force, but technology reflects changing social needs more visibly than the law and apparently provides swifter answers.Footnote 29 The law, by contrast, should filter daily changes, including technological ones, into its own language while regulating a mutable framework. In competing with other prescriptive systems, the law may be used either as an element of computational rationality or as a tool that is itself computable for achieving specific results. In the first case, the law guides and constrains the action of rational agents through the legal design of algorithms, as an external constraint. In the second case, regulatory patterns are conveyed by various nudges that use the law in a predetermined way to achieve a given goal. Depending on which of these models is chosen, there is a potential risk for the autonomy of the law with respect to algorithmic rationality. Both the autonomy of the law and the principle of certainty applicable to individuals are at stake. This is an increasingly relevant challenge, since the whole of human existence is fragmented into data.

Against these two drawbacks, the law may develop its internal rationality in a third way: as the product of a legal culture that copes with social challenges and needs. Essentially, legal culture is the way in which society reflects upon itself through the doctrinal and conceptual systems elaborated by lawyers, through interpretation, and through models of reasoning.Footnote 30 This makes the law a rational means not only by virtue of its technical linguistic potentialFootnote 31 but also by virtue of its technical task of producing social order.Footnote 32 As Weber notes, the superiority of bureaucratic legal rationality over other rational systems is technical.

Nonetheless, not all times produce a good legal culture, which can be strongly affected by political and social turmoil. In the age of datification, all fragments of daily life are translated into data, and it becomes technically possible to shape different realities on demand, including information, politics, and markets. The creation of propensities and assumptions through algorithms as the basis of a pre-packaged concept of the law – driven by colonizing factors – breaks off the spontaneous process through which legal culture surrounds the law. As a result, the effects of algorithmic legal predictions run counter to the goal of legal rationality, which is to establish certain hypotheses and to cluster factual situations within them. The production of legal culture entails the law being the outcome of specific knowledge and of normative meanings resulting from a contextual Weltanschauung. This aspect has to do neither with legitimacy nor with effectiveness, but rather with the way in which the law rests on society. In particular, the capability to produce social consequences that are not directly addressed by the law – by suggesting certain social behaviours and by activating standardized decisions on a large scale – is so powerful a tool that it has been considered the core of algorithmic states of exception.Footnote 33 The idea of exception is explained by the continuous confusion between the rule of causality and the rule of correlation.Footnote 34 Such blurring between causes and effects, evidence and probabilities, causal inferences and variables, affects database structures, administrative measures that are expressed in the form of algorithmic code, and ultimately rules.Footnote 35 Algorithms lack adaptability because they are based on correlation rather than on a causal model, and thus cannot replicate the inferential process of humans to which the general character of the law refers. Human causal intuition handles uncertainty differently from machine learning techniques.Footnote 36

Data is disruptive for its capability to blur the threshold between what is inside and what is outside the law. The transformation of human existence into data lies at the crossroads of the most relevant challenges for law and society. Data informs the functioning of legal patterns, but it can also be a component of law production. A reflection on the social function of the law in the context of algorithmic rationality is useful for understanding what type of data connections are created for regulatory purposes within an ‘architecture of assumptions’, to quote McQuillan. Decoding algorithms sometimes allows one to interpret such results, even though the plurality and complexity of societal patterns cannot be reduced to the findings of data analysis or to the inferential interpretations generated by automated decision-making processes. The growing amount of data, despite being increasingly the engine of law production, does not reflect the complexity of social reality, which instead refers to possible causal interactions between technology, reality, and regulatory patterns, and to alternative compositions of them, depending upon uncertain variables. Datification, on which advanced technologies are generally based, has profoundly altered the mechanisms of production of legal culture, which cannot easily be reduced to data aggregation or data analysis. Relevant behaviours and social changes nourish the inferences that can be made from data streams: although they can be the output of the law, they will never be the input of legal culture. Between the dry facts and the causal explanation there is a very dense texture for the elaboration of specialized jurists, legal scholars, and judges.

Furthermore, globalization strongly shapes common characters across different legal traditions, no longer identifiable with an archetypal idea of state sovereignty. This depends upon at least two factors: on the one hand, the increasing cooperation between private and public actors in data access and information management beyond national borders; on the other hand, the increasing production of data from different sources. Nonetheless, not much attention has been paid to the necessity of safeguarding the space of legal culture against law overproduction by data. Regulation of technology combined with the legal design of technology tends to create a misleading overlap between the two, because technological feasibility is becoming the natural substitute for legal rationales. Instead, I argue that the autonomous function of legal culture should be vindicated and preserved as the theoretical grid for data accumulation. What legal culture calls into question is the reflexive social function of the law, which data-driven law erases immediately by producing a computational output. In addition, the plurality of interconnected legal systems cannot be reduced to data. The increasing production of law resulting from data does not reflect the complexity of social reality. How data, and the technologies based on them, affect the rise of legal culture and the production of data-driven laws does not have to do only with data. Following a simple definition of legal culture as ‘one way of describing relatively stable patterns of legally oriented social behaviour and attitudes’,Footnote 37 one may think of data-driven law as a technologically oriented legal conduct.

The ‘commodification of “reality” and its transformation into behavioural data for analysis and sales’,Footnote 38 defined by Zuboff as surveillance capitalism, has made private human experience a ‘free raw material’Footnote 39 that can be elaborated and transformed into behavioural predictions feeding production chains and business. Data extraction allows the capitalist system to know everything about everyone. It is a ‘one-way process, not a relationship’, which produces identity fragmentation and attributes an exchange value to the single fragments of identity itself.Footnote 40 Algorithmic surveillance indeed produces a twofold phenomenon: on the one hand, it forges the extraction process itself, which is predetermined to be predictive; on the other hand, it determines effects that are not totally explainable, despite all the accurate proxies input into the system. These qualities define operational variables that are processed at such high speed that it is hard for humans to monitor them.Footnote 41

In the light of an unprecedented transformation that is radically shaping the development of personality as well as common values, the role of the law should be not only to guarantee ex post legal remedies but also to reconfigure the dimension of human beings, technology, and social action within integrated projects of coexistence with regulatory models. When an individual is subject to automated decision-making – which determines better or worse chances of well-being, greater or lesser opportunities to find a good job, or, in the case of predictive policing, a threat to the presumption of innocence – the social function of the law is necessary to cope with the increasing complexity of the relevant variables and to safeguard freedom. Power relationships striving to impose subjugation vertically, along lines of command and obedience, are replaced by a new ‘axiomatic’ power: the ability to continuously un-code and re-code the lines along which information, communication, and production intertwine, combining differences rather than forcing unity.

5.4 The Socio-legal Approach

The current socio-legal debate on the application of algorithms to legal frameworks is very much focused on issues related to data-driven innovation. Whereas the internal approach is still dominant in many regulatory areas, the relationship between law and technology requires an external perspective that takes different possibilities into account. As the impact of artificial intelligence on the law produces social and cultural patterns, a purely internal legal approach cannot contribute to a comprehensive understanding. Moreover, whereas the law produces binding effects depending on whether certain facts happen, algorithms are performative in the sense that the effect they aim to produce is encompassed in the algorithmic code. The analysis of both the benefits and the risks of algorithmic rationality has societal relevance for the substantial well-being of individuals. On the one hand, the lack of an adequate sectoral regulatory framework requires a cross-cutting analysis to highlight potential shortcomings in the existing legal tools and their inter-relationships. In addition, operational solutions should be proactive in outlining concrete joined-up policy actions, which also consider the role of soft-law solutions. On the other hand, the potential negative impact of biased algorithms on rights protection and non-discrimination risks establishing a legal regime for algorithmic rationality that does not meet societal needs. In order to address the interplay between societal needs, rights, and algorithmic decision-making, it is relevant to pinpoint several filters on the use of AI technology in daily life.

For example, a social filter sets limits on the manner in which technology is applied on the basis of the activities of people and organizations. A well-known recent example of a social filter is how taxi drivers and their backing organizations have opposed transport platforms and services. An institutional filter sets institutionally determined limits on the ways in which technology can be applied; this type of institutional system includes the corporate governance model, the education system, and the labour market system. A normative filter sets regulatory and statute-based limitations on the manner in which technology can be applied: for example, the adoption of self-driving vehicles in road traffic will be slow until the related questions of responsibility have been conclusively settled in legislation. Last but not least, an ethical filter sets restrictions on the ways in which technology is applied.

A further step requires identifying a changing legal paradigm that progressively shifts attention from the idea of a right to a reasonable explanation of the algorithm, as a form of transparency, to a right to reasonable inferences (through an extensive interpretation of the notion of personal data that includes decisional inferences), or towards an evolutionary interpretation of the principle of good administration.Footnote 42 The evolutionary interpretation of the principle of good administration has placed the algorithmic ‘black box’ within a more fruitful path, oriented towards the legality and responsibility of the decision maker in the algorithmic decision-making process. This is particularly relevant, for example, in the field of preventive surveillance, as it is mainly a public service whose technological methods can be interpreted in the light of the principle of good administration.

More broadly, the rationale of AI in the digital single market should inter alia guarantee: (1) better, cost-efficient services; (2) unified cross-border public services, with increased efficiency and improved transparency; (3) the participation of individuals in the decision-making process; and (4) an improved use of AI in the private sector, as a potential means to improve business and competitiveness.Footnote 43

In order to achieve these objectives, it is necessary to evaluate the social impact, as well as the risks and opportunities, entailed by the interaction between public and private actors in accessing data through the use of algorithmic rationality combined with legal rationality. However, the optimization of organizational processes in terms of efficiency, on the one hand, and the degree of users’ satisfaction, on the other, are not the relevant factors for facing the impact of algorithms on rights. The law, in preserving individual chances of emancipation, is at the centre of this interaction, constituting the beginning and the end of the causal chain, since both the production of law for protecting rights and the violation of rights significantly alter this relationship. This aspect is significant, for instance, in the field of machine learning carried out on the basis of the mass collection of data flows, from which algorithms are able to learn. The ability of machine learning techniques to model human behaviour, to codify reality, and to stigmatize groups increases the risk of entrenching static social situations, undermining the free and self-determined development of personality. Such a risk is real irrespective of whether algorithms are used to align a legal system with a predetermined market model or to reach a precise outcome of economic policy. In both cases, algorithms exceed the primary function of the law, which is to match the provision of general and abstract rules with concrete situations through adaptive solutions. Such an adaptation process is missing in the algorithmic logic, because the algorithmic code is unchangeable.

Law as a social construction is able to address specific situations and, at the same time, to change in its interpretation or according to social needs. Indeed, law should advocate an emancipatory function for human beings, who are not to be subject to personal powers. If applied to algorithmic decision-making in the broadest context, the personality of laws may result in tailored and fragmented pictures corresponding to ‘social types’ built on profiling techniques. This is why law production by data processed through algorithms cannot be the outcome of any legal culture: it would be a pre-packaged solution regardless of the institutional and political context surrounding causes and effects. Nonetheless, the increasingly tailored production of data-driven law through algorithmic rationality cannot cross such a threshold in a way that enables decision-making – at any level of daily life – irrespective of autonomy, case-by-case evaluation, and freedom.

The alignment of legal requirements and algorithmic operational rules must always be demonstrated ex post both at a technical level and at a legal level in relation to the concrete case.

5.5 Data Production by Law

Against the backdrop of data-driven law, legal rationality should be able to frame a model based instead on data production by law. A real challenge that should be borne in mind, however, is that algorithmic bureaucracy does not need a territory as legal bureaucracy does.Footnote 44 Algorithmic systems are ubiquitous, along with the data that feed machine learning techniques. Whereas the bureaucratic state is a way to organize and manage the distribution of power over and within a territory, algorithms are not limited by territory. The fragmentation of sovereignty operated by data flows shows that virtual reality is a radical alternative to territorial sovereignty and cannot be understood as a mere assignment of sovereign powers over portions of data. The ubiquity of data requires a new description of regulatory patterns in the field of cross-border data governance, since data location – which under certain conditions would determine the application of one legal regime and the exclusion of another – is not necessarily a criterion meaningfully associated with data flows. Data is borderless, as it can be scattered across different countries.Footnote 45 Although data can be accessed anywhere irrespective of where it is located, its regulation and legal effects are still anchored to the territoriality principle. Access to data does not depend on physical proximity; nor are the regulatory schemes arising from data flows intrinsically or necessarily connected to any particular territory. A connection with territory must justify jurisdictional claims, but it does not have much to do with physical proximity. Such disconnection between information and territory potentially generates conflicts of law and may produce contrasting claims of sovereign powers.Footnote 46 This is magnified by algorithmic systems, which have no forum loci because they are valid formulations regardless of the geographical space where they are applied. Furthermore, they gather data sets irrespective of borders or jurisdictions. Bureaucracy’s functioning depends largely upon borders, as it works only within a limited territory.Footnote 47 Algorithms, on the contrary, are unleashed from territories but can affect multiple jurisdictions, as the algorithmic code is territorially neutral. This may be dangerous for two reasons: on the one hand, algorithms can transversally impact different jurisdictions, regardless of the legal systems and regulatory regimes involved; on the other hand, the disconnection of the algorithmic code from territory implies a law production that does not emerge from legal culture. Even though legal culture is not necessarily bound to the concept of state sovereignty,Footnote 48 it is inherent to a territory as a political and social space. Weber rejects the vision of the modern judge as a machine into which ‘documents are input together with expenses’ and which spits out the sentence together with the motives mechanically inferred from the paragraphs; there remains, indeed, a space for individualizing assessment, in respect of which the general norms have a negative function in that they limit the official’s positive and creative activity.Footnote 49 This massive difference between legal rationality and algorithmic rationality requires rethinking the relationship between law, technology, and legal culture. Data production by law can be a balanced response to reconnect algorithmic codes to the boundaries of jurisdictions.

Of course, many means of data production by law exist. A simple legal design of data production is not the optimal option: matching the algorithmic production of data with legal compliance can be mechanically ensured through the application of certain patterns inserted into the algorithmic process. Rather, it is the impact of legal culture on the algorithmic production of data that shapes a socio-legal context inspiring the legal application of rules on data production.

The experience of the Italian Supreme Administrative Court (Council of State) is noteworthy. After the leading case of 8 April 2019, n. 2270, which opened the path to administrative algorithmic decision-making, the Council of State confirmed its case law.Footnote 50 It upheld the lawfulness of automated decision-making in administrative law, setting out limits and criteria.Footnote 51 For the first time, it extended automated decision-making to both the discretionary and the binding activities of public administration. The use of algorithmic administrative decision-making is encompassed by the principle of good performance of administration pursuant to article 97 of the Italian Constitution. The Council stated that the fundamental need for protection posed by the use of the so-called algorithmic IT tool is transparency, which stems from the principle that decisions must be motivated.Footnote 52 It expressly denied algorithmic neutrality, holding that predictive models and criteria are the result of precise choices and values. Conversely, the danger associated with the instrument is not overcome by the rigid and mechanical application of all the detailed procedural rules of Law no. 241 of 1990 (such as, for example, the notice of initiation of the proceeding).

The underlying innovative rationale is that the ‘multidisciplinary character’ of the algorithm requires not only legal but also technical, IT, statistical, and administrative skills, and does not exempt the administration from the need to explain and translate the ‘technical formulation’ of the algorithm into the ‘legal rule’, in order to make it legible and understandable.

Since the algorithm becomes a modality of the authoritative decision, it is necessary to determine specific criteria for its use. Somewhat surprisingly, the Council performed an operation of legal blurring, affirming that knowability and transparency must be interpreted in accordance with articles 13, 14, and 15 GDPR. In particular, the interested party must be informed of the possible execution of an automated decision-making process; in addition, the owner of the algorithm must provide meaningful information about the logic involved, as well as the significance and the expected consequences of such processing for the interested party.

Additionally, the Council adopted three supranational principles: (1) the full knowability of the algorithm used and of the criteria applied, pursuant to article 41 of the EU Charter (‘Right to good administration’), according to which everyone has the right to know of the existence of automated decision-making processes concerning him or her and, in that case, to receive meaningful information about the logic involved; (2) the non-exclusivity of automated decision-making, according to which everyone has the right not to be subjected to solely automated decision-making (similarly to article 22 GDPR); and (3) the non-discrimination principle, as a result of the application of the principle of non-exclusivity, together with data accuracy, minimization of the risks of error, and data security.Footnote 53 In particular, the data controller must use appropriate mathematical or statistical procedures for profiling, implementing adequate technical and organizational measures to ensure the correction of factors that lead to data inaccuracy, thus minimizing the risk of errors.Footnote 54 Input data should be corrected to avoid discriminatory effects in the decision-making output. This operation requires the necessary cooperation of those who instruct the machines that produce these decisions.

The goal of a legal design approach is to filter data production through the prevention of potential algorithmic harms and the protection of individual rights, and to figure out which kinds of legal remedies are available and useful to individuals. The first shortcoming of such an endeavour is that – even taking for granted the logic of garbage in/garbage out, according to which inaccurate inputs produce wrong outputs – a legal input is not a sufficient condition for a lawful output. Instead, an integrated approach such as the one adopted by the Council of State rests on more complex criteria for assessing the lawfulness of algorithmic decision-making, also with respect to the actors involved. First, it is necessary to ensure the traceability of the final decision to the competent body, pursuant to the law conferring the power of the authoritative decision on the civil servants in charge.Footnote 55 Second, the comprehensibility of algorithms must cover all aspects but cannot result in harm to IP rights: pursuant to art. 22, let. c, Law 241/90, holders of an IP right on software are considered counter-interested parties,Footnote 56 although the Consiglio di Stato does not specifically address the position of holders of trade secrets.
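To make the garbage in/garbage out point concrete: even a formally ‘legal’ input – a training set from which the protected attribute has been removed – is not a sufficient condition for a lawful output, because a correlated proxy can reproduce the disparity. The following sketch is a hypothetical illustration; all data, feature names, and numbers are invented.

```python
# A hypothetical sketch of garbage in/garbage out: the protected attribute
# is removed before training (a formally "legal" input), yet a correlated
# proxy feature reproduces the disparity in the output. All data, names,
# and numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)               # protected attribute (0/1)
postcode = group + rng.normal(0, 0.3, n)    # proxy strongly correlated with group
income = rng.normal(50, 10, n)              # legitimate feature
# Historical outcomes already biased against group 1:
y = ((income + rng.normal(0, 5, n) - 8 * group) > 48).astype(int)

X = np.column_stack([postcode, income])     # group itself excluded: "legal input"
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Disparate approval rates persist despite removing the protected attribute
print("approval rate, group 0:", round(pred[group == 0].mean(), 3))
print("approval rate, group 1:", round(pred[group == 1].mean(), 3))
```

Correcting the input data, as the Council requires, would therefore have to reach the proxy as well, which is precisely why a legal input alone does not guarantee a lawful output.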

5.6 Conclusions: Towards Data Production of Law

While discussing the similarities between bureaucratic and algorithmic rationality, I deliberately did not address the issue of secrecy. According to Weber, every power that aims at its own preservation is, in one of its features, a secret power. For all bureaucracies, secrecy is functional to the superiority of their technical tasks over other rational systems.Footnote 57 Secrecy is also the fuel of algorithmic reasoning, as its causal explanation is mostly secret. This common aspect, if taken for granted as a requirement of efficient rational decision-making, should be weighed very precisely in order to render algorithms compliant with the principle of legality.

This chapter has explored how algorithmic bureaucracy proves to be a valuable form of rationality insofar as it does not totally eliminate human intermediation in the form of imputability, responsibility, and control.Footnote 58 To be sure, this may happen only under certain conditions, which can be summarized as follows: (1) technological neutrality in law production cannot be a space ‘where legal determinations are de-activated’Footnote 59 in a way that externalizes control; (2) law production by data is not compatible with Weberian legal rationality; (3) the translation of technical rules into legal rules needs to be filtered through legal culture; (4) data production by law is the big challenge of algorithmic rationality; (5) algorithmic disconnection from territory cannot be replaced by algorithmic global surveillance; (6) the legal design of algorithmic functioning is not an exhaustive solution; (7) the linkage of automated decision-making to the principle of good administration is a promising trajectory, along which concepts such as traceability, knowability, accessibility, readability, imputability, responsibility, and non-exclusivity of the automated decision have been developed in the public interest.

All these conditions underlie a regulatory idea that draws the role of lawyers from what Max Weber defined as die geistige Arbeit als Beruf (intellectual work as a vocation). In this respect, algorithmic rationality may be compatible with a legal creative activity as long as society is well equipped with good lawyers.Footnote 60 The transformation of law production by data into data production by law is a complex challenge that lawyers can drive, provided they do not give up being humanists in order to be only specialized experts.Footnote 61 From this perspective, algorithmic bureaucratic power has a good chance of becoming an ‘intelligent humanism’.Footnote 62 To accomplish this task, the law should re-appropriate its own instruments of knowledge production. This does not mean developing a simplistic categorization of legal compliance requirements for machine-learning techniques; nor does it rely only on the formal application of legal rationality to the algorithmic process. In the long run, it should lead towards increasing forms of data production of law. Data production of law denotes the capability of the law to pick and choose the data that are relevant for elaborating new forms of legal culture. How the law autonomously creates knowledge from experiences that impact society is a reflexive process that needs institutions as well as individuals. The more this process is enshrined in a composite legal culture, the better the law’s chances of recentring its own role in the development of democratic societies.

Footnotes

1 Massimo Airoldi and Daniele Gambetta, ‘Sul mito della neutralità algoritmica’, (2018) 4 The Lab’s Quarterly, 29.

2 Luciano Floridi, The Onlife Manifesto. Being Human in a Hyperconnected Era (Springer, 2015).

3 Max Weber, Economia e società, (Edizioni di Comunità, 1st ed., 1974), 687.

4 Chiara Visentin, ‘Il potere razionale degli algoritmi tra burocrazia e nuovi idealtipi’, The Lab’s Quarterly, 47–72, 57–58.

5 Max Weber, ‘Politics as a Vocation’, in Hans Gerth (ed.) and C. Wright Mills (trans.), From Max Weber: Essays in Sociology (Oxford University Press, 1946), 77; Economia e società, 685.

6 Max Weber, Economia e società, 260, 262.

7 Max Weber, ‘Politics as a Vocation’, 88.

8 Max Weber, Economia e società, 278.

9 Karen Yeung, ‘Why Worry about Decision-Making by Machine?’, in Karen Yeung and Martin Lodge (eds.), Algorithmic Regulation (Oxford University Press, 2019), 24. However, there is a lively debate on digital personhood and responsibility; see G. Teubner, ‘Digital Personhood: The Status of Autonomous Software Agents in Private Law’, (2018) Ancilla Iuris, 35. According to the Robotic Charter of the EU Parliament, in the event that a robot can make autonomous decisions, the traditional rules are not sufficient to activate liability for damages caused by a robot, as they would not make it possible to determine who is responsible for compensation or to demand that this person repair the damage caused.

10 Max Weber, Economia e società, 269.

13 Thomas Vogl, Cathrine Seidelin, Bharath Ganesh, and Jonathan Bright, ‘Algorithmic Bureaucracy. Managing Competence, Complexity, and Problem Solving in the Age of Artificial Intelligence’ (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3327804.

14 A. Aneesh, ‘Technologically Coded Authority: The Post-Industrial Decline in Bureaucratic Hierarchies’ (2002) Stanford University Papers, http://web.stanford.edu/class/sts175/NewFiles/AlgocraticGovernance.pdf.

15 European Commission, White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, ‘The output of the AI system does not become effective unless it has been previously reviewed and validated by a human’, https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf, 21.

16 Furio Ferraresi, ‘Genealogie della legittimità. Città e stato in Max Weber’, (2014) 5 Società Mutamento Politica, 143, 146.

17 On the concept of data extraction, see Deborah De Felice, Giovanni Giuffrida, Giuseppe Giura, Vilhelm Verendel, and Calogero G. Zarba, ‘Information Extraction and Social Network Analysis of Criminal Sentences. A Sociological and Computational Approach’, (2013) Law and Computational Social Science, 243–262, 251.

18 Michael Veale and Irina Brass, ‘Administration by Algorithm?’, in Karen Yeung and Martin Lodge (eds.), Algorithmic Regulation (Oxford University Press, 2019), 123–125; Anthony J. Casey and Anthony Niblett, ‘A Framework for the New Personalization of Law’, (2019) 86 University of Chicago Law Review, 333, 335.

19 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Society (Harvard University Press, 2015), 8.

20 Riccardo Guidotti et al., ‘A Survey of Methods for Explaining Black Box Models’, (2018) ACM Computing Surveys, 1, 18.

21 T. Zarsky, ‘The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making’, (2016) 41 Science, Technology, & Human Values, 119.

22 Riccardo Guidotti et al. at Footnote n. 20, 5.

23 Ira Rubinstein, ‘Big Data: The End of Privacy or a New Beginning?’, (2013) 3 International Data Privacy Law, 74.

24 Marta Infantino, Numera et impera. Gli indicatori giuridici globali e il diritto comparato (Franco Angeli, 2019), 29.

25 Enrico Campo, Antonio Martella, and Luca Ciccarese, ‘Gli algoritmi come costruzione sociale. Neutralità, potere e opacità’, (2018) 4 The Lab’s Quarterly, 7.

26 Max Weber, Economia e società, 278.

27 David Beer, ‘The Social Power of Algorithms’, (2017) 20 Information, Communication & Society, 1–13.

28 Shoshana Zuboff, The Age of Surveillance Capitalism. The Fight for a Human Future at the New Frontier of Power (Public Affairs, 2019), 376–377.

29 Karen Yeung and Martin Lodge, ‘Introduction’, in Karen Yeung and Martin Lodge (eds.), Algorithmic Regulation (Oxford University Press, 2019), 5.

30 Giovanni Tarello, Cultura giuridica e politica del diritto (Il Mulino, 1988), 24–25.

33 According to Dan McQuillan, ‘Algorithmic States of Exception’, (2015) 18 European Journal of Cultural Studies, 564, 569: ‘While tied to clearly constituted organisational and technical systems, the new operations have the potential to create social consequences that are unaddressed in law.’

36 For a deep analysis of causality and correlation, see Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect (Penguin Books, 2018), 27.

37 David Nelken, ‘Using the Concept of Legal Culture’, p. 1.

38 Lionel Ching Kang Teo, ‘Are All Pawns in a Simulated Reality? Ethical Conundrums in Surveillance Capitalism’, 10 June 2019, https://anthrozine.home.blog/tag/capitalism/.

39 Shoshana Zuboff, The Age of Surveillance Capitalism, ‘The Definition’ (Public Affairs, 2019).

40 Shoshana Zuboff, ‘Big Other: Surveillance Capitalism and the Prospects of an Information Civilization’, (2015) 30 Journal of Information Technology, 75–89.

41 Frederik Z. Borgesius, Discrimination, Artificial Intelligence, and Algorithmic Decision-Making (Council of Europe, Strasbourg, 2018), https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73, 10; Karen Yeung, at Footnote n. 9, 25.

42 Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, and Fosca Giannotti, ‘A Survey of Methods for Explaining Black Box Models’, ACM Computing Surveys, February 2018, 1 ff.

43 European Commission, A European Strategy for Data, COM(2020) 66 final, 19 February 2020, https://ec.europa.eu/info/sites/info/files/communication-european-strategy-data-19feb2020_en.pdf.

44 Max Weber, Economia e società, 253.

45 Jennifer Daskal, ‘The Un-Territoriality of Data’, (2015) 125 The Yale Law Journal, 326.

46 Andrew Keane Woods, ‘Litigating Data Sovereignty’, (2018) 128 The Yale Law Journal, 328.

47 Ibid., 203–205.

48 David Nelken, ‘Using the Concept of Legal Culture’, (2004) 29 Australian Journal of Legal Philosophy, 4: ‘Given the extent of past and present transfer of legal institutions and ideas, it is often misleading to try and relate legal culture only to its current national context.’

49 Max Weber, Economia e società, 281–282.

50 See Nicolò Muciaccia, ‘Algoritmi e procedimento decisionale: alcuni recenti arresti della giustizia amministrativa’, (2020) 10 Federalismi.it, 344, www.sipotra.it/wp-content/uploads/2020/04/Algoritmi-e-procedimento-decisionale-alcuni-recenti-arresti-della-giustizia-amministrativa.pdf.

51 Consiglio di Stato, sec VI, 13 December 2019, n. 8472, n. 8473, n. 8474. Against the application of algorithmic decision-making to administrative proceedings, see T.A.R. Lazio Roma, sec. III bis, 27 May 2019, n. 6606 and T.A.R. Lazio Roma, sec. III bis, 13 September 2019, n. 10964.

52 Consiglio di Stato, sec. VI, 8 April 2019, n. 2270. See Gianluca Fasano, ‘Le decisioni automatizzate nella pubblica amministrazione: tra esigenze di semplificazione e trasparenza algoritmica’, (2019) 3 Medialaws, www.medialaws.eu/rivista/le-decisioni-automatizzate-nella-pubblica-amministrazione-tra-esigenze-di-semplificazione-e-trasparenza-algoritmica/.

53 See Enrico Carloni, ‘AI, algoritmi e pubblica amministrazione in Italia’, (2020) 30 Revista de los Estudios de Derecho y Ciencia Política, www.uoc.edu/idp.

54 Consiglio di Stato recalls recital 71 GDPR.

55 Similarly, see T.A.R. Lazio Roma, sec. III bis, 28 May 2019, n. 6686; Consiglio di Stato, sec VI, 4 February 2020, n. 881.

56 Consiglio di Stato, sec. VI, 2 January 2020, n. 30.

57 Max Weber, Economia e società, 257, 276; Massimo Cacciari, Il lavoro dello spirito (Adelphi, 2020).

58 On the idea of adapting technology, see Luciano Gallino, Tecnologia e democrazia. Conoscenze tecniche e scientifiche come beni pubblici (Einaudi, 2007), 132, 195.

59 Dan McQuillan at Footnote n. 33, 570.

60 Anthony T. Kronman, Education’s End: Why Our Colleges and Universities Have Given Up on the Meaning of Life (Yale University Press, 2007), 205; Margherita Ramajoli, ‘Quale cultura per l’amministrazione pubblica?’, in Beatrice Pasciuta and Luca Loschiavo (eds.), La formazione del giurista. Contributi a una riflessione (Roma Tre-press, 2018), 103.

61 According to Roderick A. Macdonald and Thomas B. McMorrow, ‘Decolonizing Law School’, (2014) 51 Alberta Law Review, 717: ‘The process of decolonizing law school identified by the authors is fundamentally a process of moving the role of human agency to the foreground in designing, building, and renovating institutional orders that foster human flourishing.’

62 David Howarth, ‘Is Law a Humanity (Or Is It More Like Engineering)?’, (2004) 3 Arts & Humanities in Higher Education, 9.
