1.1 Introduction
Technologies have always led to turning points in society.Footnote 1 In the past, technological developments have opened the door to new phases of growth and change, while influencing social values and principles. Algorithmic technologies fit within this framework. These technologies have contributed to introducing new ways to process vast amounts of data.Footnote 2 In the digital economy, data and information are fundamental assets which can be considered raw materials the processing of which can generate value.Footnote 3 Even simple pieces of data, when processed with a specific purpose and mixed with other information, can provide models and predictive answers. These opportunities have led to the rise of new applications and business models in a new phase of (digital) capitalism,Footnote 4 as more recently defined as information capitalism.Footnote 5
Although these technologies have positive effects on society as a whole, since they increase the capacity of individuals to exercise rights and freedoms, they have also led to new constitutional challenges. The opportunities afforded by algorithmic technologies clash with their troubling opacity and lack of accountability, in what has been defined as an ‘algocracy’.Footnote 6 It is no coincidence that transparency is at the core of the debate about algorithms.Footnote 7 There are risks to fundamental rights and democracy inherent in the lack of transparency about the functioning of automated decision-making processes.Footnote 8 The implications deriving from the use of algorithms may have consequences for individuals’ fundamental rights, such as the right to self-determination, freedom of expression, and privacy. However, fundamental rights do not exhaust the threats which these technologies raise for constitutional democracies. The spread of automated decision-making also challenges democratic systems due to its impact on public discourse and the impossibility of understanding decisions made by automated systems affecting individual rights and freedoms.Footnote 9 This is evident when focusing on how information flows online and on the characteristics of the public sphere, which is increasingly personalised rather than plural.Footnote 10 Likewise, the field of data is even more compelling, given the ability of data controllers to affect users’ rights to privacy and data protection by implementing technologies whose transparency and accountability cannot be ensured.Footnote 11 The possibility of obtaining financing and insurance, or the assessment of the likelihood of a potential crime, are only some examples of the efficient answers which automated decision-making systems can provide and of how such technologies can affect individuals’ autonomy.Footnote 12
At first glance, algorithms seem like neutral technologies processing information which can lead to a new understanding of reality and predict future dynamics. Technically, algorithms, including artificial intelligence technologies, are just methods to express results based on inputs made up of data.Footnote 13 This veil of neutrality falls before their human fallibility. Processes operated by algorithms are indeed value-laden, since technologies are the result of human activities and determinations.Footnote 14 The contribution of humans to the development of data processing standards causes the shift of personal interests and values from the human to the algorithmic realm. If, from a technical perspective, algorithms are instruments that extract value from data, then, moving to the social perspective, such technologies constitute automated decision-making processes able to affect society and thus also to impact constitutional values, namely fundamental rights and democratic values.
Within this challenging framework between innovation and risk, it is worth asking about the role of regulation and policy in this field. Leaving the development of algorithmic technologies without safeguards and democratic oversight could lead society towards techno-determinism and the marginalisation of public actors, which would lose their role in ensuring the protection of fundamental rights and democratic values. Technology should not order society but be a means of promoting the evolution of mankind. Otherwise, if the former comes to order the drive of the latter in the years to come, we could witness the gradual vanishing of democratic constitutional values in the name of innovation.
Since algorithms are becoming more and more pervasive in daily life, individuals will increasingly expect to be aware of the implications deriving from the use of these technologies. Individuals are increasingly surrounded by technical systems influencing their decisions without the possibility of understanding or controlling this phenomenon and, as a result, participating consciously in the democratic debate. This situation is not only the result of algorithmic opacity, but it is firmly linked to the private development of algorithmic technologies in constitutional democracies. Because of the impact of these technologies on our daily lives, the predominance of businesses and private entities in programming and in guiding innovation in the age of artificial intelligence leads one to consider the role and responsibilities of these actors in the algorithmic society. The rise of ‘surveillance capitalism’ is not only a new business framework but a new system to exercise (private) powers in the algorithmic society.Footnote 15
We believe that constitutional law plays a critical role in addressing the challenges of the algorithmic society. New technologies have always challenged, if not disrupted, the social, economic, legal, and, to a certain extent, ideological status quo. Such transformations impact constitutional values, as the state formulates its legal response to new technologies based on constitutional principles which meet market dynamics, and as it considers its own use of technologies in light of the limitations imposed by constitutional safeguards. The development of data collection, mining, and algorithmic analysis, resulting in predictive profiling – with or without the subsequent potential manipulation of the attitudes and behaviours of users – presents unique challenges to constitutional law at the doctrinal as well as theoretical levels.
Constitutions have been designed to limit public (more precisely, governmental) powers and protect individuals against any abuse by the state. The shift of power from public to private hands requires rethinking and, where necessary, revisiting some well-established assumptions. Moreover, during the rise of the bureaucratic state, the technologies for infringing liberty or equality were thought to be containable by the exercise of concrete judicial review (either constitutional or administrative), abstract judicial review, or a combination of the above. In recent years, however, the rise of the algorithmic society has led to a paradigmatic change where public power is no longer the only source of concern for the respect of fundamental rights and the protection of democracy, where jurisdictional boundaries are in flux, and where doctrines and procedures developed in the pre-cybernetic age do not necessarily capture rights violations in a relevant time frame. This requires either redrawing the constitutional boundaries so as to subject digital platforms to constitutional law or revisiting the relationship between constitutional law and private law, including the duties of the state to regulate the cybernetic complex, within or outside the jurisdictional boundaries of the state. Within this framework, the rise of digital private powers challenges the traditional characteristics of constitutional law, thus prompting us to wonder how the latter might evolve to face the challenges brought by the emergence of new forms of powers in the algorithmic society.
The primary goal of this chapter is to introduce the constitutional challenges coming from the rise of the algorithmic society. Section 1.2 examines the challenges for fundamental rights and democratic values, with a specific focus on the right to freedom of expression, privacy, and data protection. Section 1.3 looks at the role of constitutional law in relation to the regulation and policy of the algorithmic society. Section 1.4 examines the role and responsibilities of private actors underlining the role of constitutional law in this field. Section 1.5 deals with the potential remedies which constitutional law can provide to face the challenges of the information society.
1.2 Fundamental Rights and Democratic Values
Algorithmic technologies seem to promise new answers and increased accuracy in decision-making, thus offering new paths to enrich human knowledge.Footnote 16 Predictive models can help public administrations provide more efficient public services and spare resources. Likewise, citizens can rely on more sophisticated platforms allowing them to express their identity, build social relationships, and share ideas. Therefore, these technologies can be considered an enabler of the exercise of rights and freedoms. Nonetheless, artificial intelligence technologies are far from perfect. Predictive models have already produced biased and inaccurate outputs, leading to discriminatory results.Footnote 17 The implications deriving from the implementation of automated technologies may have consequences for individual fundamental rights, such as the right to self-determination, freedom of expression, and privacy, even at a collective level. It is worth stressing that the relationship between fundamental rights and democracy is intimate, and the cases of freedom of expression and data protection underline this bond. Without the possibility of expressing opinions and ideas freely, it is not possible to define a society as democratic. Likewise, without rules governing the processing of personal data, individuals could be exposed to a regime of private surveillance without a set of accountability and transparency safeguards. Among different examples, the moderation of online information and the profiling of users can be taken as two paradigmatic examples of the risks which these technologies raise for fundamental rights and democratic values.
The way in which we express opinions and ideas online has changed in the last twenty years. The Internet has contributed to shaping the public sphere. It would be a mistake to consider the new channels of communication merely as threats. The digital environment has indeed been a crucial vehicle for fostering democratic values like freedom of expression.Footnote 18 However, this does not imply that threats have not appeared on the horizon. On the contrary, the implementation of automated decision-making systems is concerning for the protection of the right to freedom of expression online. To understand where automation meets (and influences) free speech, it is enough to look closely at how information flows online under the moderation of online platforms. Indeed, to organise and moderate countless items of content each day, platforms also rely on artificial intelligence to decide whether to remove content or to flag certain expressions to human moderators.Footnote 19 The result of this environment is troubling for the rule of law from different perspectives. First, artificial intelligence systems contribute to interpreting the legal protection of fundamental rights by de facto setting a private standard of protection in the digital environment.Footnote 20 Second, there is also an issue of predictability and legal certainty, since private determinations blur the lines between public and private standards. This leads us to the third point: the lack of transparency and accountability in decisions concerning freedom of expression online.Footnote 21 In other words, the challenge in this case is to measure compliance with the principle of the rule of law. Indeed, the implementation of machine learning technologies does not allow the scrutiny of decisions over expression which are still private but involve the public at large. In the absence of regulation providing legal safeguards, online platforms will continue to be free to assess and remove speech according to their business purposes.
Within this framework, disinformation deserves special attention.Footnote 22 Among the challenges amplified by technology, the spread of false content online has raised concerns for countries around the world. The Brexit referendum and ‘Pizzagate’ during the 2016 US elections are just two examples of the power of (false) information in shaping public opinion. The relevance of disinformation for constitutional democracies can be viewed from two angles: the constitutional limits to regulatory countermeasures and the use of artificial intelligence systems in defining the boundaries of disinformation and moderating this content. While, for public actors, the decision to intervene to filter falsehood online requires questioning whether and to what extent it is acceptable for liberal democracies to enforce limitations on freedom of expression to counter falsehood, artificial intelligence systems catalogue vast amounts of content, deciding whether it deserves to be online according to the policies implemented by unaccountable private actors (i.e., online platforms). This is a multifaceted question, since each constitutional system adopts different paradigms of protection, even when they share a common liberal matrix, as in the case of Europe and the United States. In other words, it is a matter of understanding the limits of freedom of speech to protect legitimate interests or safeguard other constitutional rights.
Besides, the challenges of disinformation are not just directly linked to the governance of online spaces but also to their exploitation. We have experienced in recent years the rise of new (digital) populist narratives manipulating information for political purposes.Footnote 23 Indeed, in the political context, technology has proven to be a channel for conveying disinformation about citizenship, democracy, and democratic values. By exploiting the opportunities of the new social media, populist voices have become a relevant part of the public debate online, as the political situations in some Member States show. Indeed, extreme voices at the margins drive the political debate. It is enough to mention the electoral successes of Alternative für Deutschland in Germany or the Five Star Movement in Italy to understand how populist narratives have spread no longer as an answer to the economic crisis but as anti-establishment movements fighting globalised phenomena like migration and proposing a constitutional narrative that dismantles democratic values and the principle of the rule of law.Footnote 24
The threats posed by artificial intelligence technologies to fundamental rights can also be examined by looking at the processing of personal data. Even more evidently, automated decision-making systems raise comparable challenges in the field of data protection. The massive processing of personal data by public and private actors leads individuals to be subject to increasingly intrusive interferences in their private lives.Footnote 25 Smart applications at home or biometric recognition technologies in public spaces are just two examples of the extensive challenges for individual rights. The logics of digital capitalism and accumulation make surveillance technologies ubiquitous, leaving no space for individuals to escape. In order to build such a surveillance and profiling framework, automated decision-making systems also rely on personal data to provide outputs. The use of personal information for this purpose leads one to wonder whether individuals should have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.Footnote 26 These data subjects’ rights have been primarily analysed from the perspective of the right to explanation. Scholars have pointed out possible bases for the right to explanation, such as those provisions mandating that data subjects receive meaningful information concerning the logic involved, as well as the significance and the envisaged consequences of the processing.Footnote 27
These threats might suggest looking at these technologies with fear. Nonetheless, new technologies are playing a disruptive role. Society is increasingly digitised, and the way in which values are perceived and interpreted is inevitably shaped by this evolution. New technological development has always led to conflicts between the risks and the opportunities fostered by its newness.Footnote 28 Indeed, the uncertainty of novel situations is a natural challenge for constitutional democracies, particularly for the principle of the rule of law.Footnote 29 The increasing degree of uncertainty concerning the applicable legal framework, and the exercise of power that can exploit technologies on the basis of legal loopholes, also lead one to wonder how to ensure due process in the algorithmic society. Therefore, the challenges at stake broadly involve the principle of the rule of law, not only because of the troubling legal uncertainty relating to new technologies but also as a limit against the private determination of fundamental rights protection, whose boundaries are increasingly shaped and determined by machines. The rule of law can be seen as an instrument to measure the degree of accountability, the fairness of application, and the effectiveness of the law.Footnote 30 As Krygier observed, it also has the goal of securing freedom from certain dangers or pathologies.Footnote 31 The rule of law is primarily considered the opposite of arbitrary public power. Therefore, it is a constitutional bastion limiting the exercise of authority outside any constitutional limit and ensuring that those limits answer to a common constitutional scheme.
Within this framework, the increasing spread and implementation of algorithmic technologies in everyday life lead one to wonder about the impact of these technologies on individuals’ fundamental rights and freedoms. This process may tend to promote a probabilistic approach to the protection of fundamental rights and democratic values. The rise of probability as the primary dogma of the algorithmic society raises questions about the future of the principle of the rule of law. Legal certainty is increasingly under pressure from the unaccountable determinations of automated decision-making technologies. Therefore, it is worth focusing on a regulatory framework which could strike a balance between ensuring the protection of democratic values and not overwhelming the private sector with disproportionate obligations that suppress innovation.
1.3 Regulation and Policy
Fundamental rights and democratic values seem to be under pressure in the information society. This threat to constitutional democracies leads one to wonder about the role of regulation and policy within the framework of algorithmic technologies. The debate about regulating digital technologies started with the questioning of consolidated notions such as sovereignty and territory.Footnote 32 The case of Yahoo v. Licra is a paradigmatic example of the constitutional challenges on the horizon in the early 2000s.Footnote 33 More precisely, some authors have argued that regulation based on geographical boundaries is unfeasible, so that applying national laws to the Internet is impossible.Footnote 34 Specifically, Johnson and Post held that ‘events on the Net occur everywhere but nowhere in particular’ and that, therefore, ‘no physical jurisdiction has a more compelling claim than any other to subject events exclusively to its laws’.Footnote 35 In the cyber-anarchic view, the rise of internet law would cause the disintegration of state sovereignty over cyberspace,Footnote 36 thus potentially making any regulatory attempt irrelevant to the digital environment. This was already problematic for the principle of the rule of law, since the self-regulation of cyberspace would have marginalised legal norms, de facto undermining any guarantee.
These positions have partially shown their fallacies, and scholars have underlined how States are in fact able to regulate the digital environment through different modalities,Footnote 37 along with how to solve the problem of enforcement in the digital space.Footnote 38 Nonetheless, this is not the end of the story. Indeed, in recent years, new concerns have arisen as a result of the increasing economic power that some business actors have acquired in the digital environment, especially online platforms. This economic power was primarily the result of the potentialities of digital technologies and of the high degree of freedom accorded to the private sector by constitutional democracies.Footnote 39 The shift from the world of atoms to that of bits has led to the emergence of new players acting as information gatekeepers that hold significant economic power, with primary effects on individuals’ everyday lives.Footnote 40
Within this framework, while authoritarian States have been shown to impose their powers online,Footnote 41 constitutional democracies have followed another path. In this case, public actors rely on the private sector as a proxy in the digital environment.Footnote 42 The role of the private sector in the digitisation of public administration or the urban environment can be considered a paradigmatic relationship of collaboration between the public and private sectors. Likewise, States usually rely on the algorithmic enforcement of individual rights online, as in the case of the removal of illegal content such as terrorist material or hate speech.Footnote 43 In other words, the intersection between public and private leads one to wonder how to prevent public values from being subordinated to the determinations of private business interests. The Snowden revelations have already underlined how much governments rely on Internet companies to extend their surveillance programmes and escape accountability.Footnote 44 Even when public actors act neither as market participants nor as regulators, they operate through an ‘invisible handshake’ based on the cooperation between market forces and public powers.Footnote 45
This situation has led constitutional democracies to adopt liberal approaches to the digital environment, with the result that self-regulation plays a predominant role. Ordo-liberal thinking considers the market and democracy as two intimately connected forces. Nonetheless, when market logics and dynamics based on the maximisation of profit and private business purposes prevail over the protection of individuals’ fundamental rights and freedoms, it is worth wondering about the role of regulation in mitigating this situation. The challenges raised by the implementation of artificial intelligence technologies compel us to define what the proper legal framework for artificial intelligence requires. The creation of a hard law framework rather than a soft law one is not without consequences. Both options offer a variety of benefits but also suffer from disadvantages, which should be taken into account when developing a framework for artificial intelligence systems.
Technology is also an opportunity, since it can provide better systems for the enforcement of legal rules as well as a clear and reliable framework compensating for the shortcomings of certain processes.Footnote 46 There is thus no definitive ‘recipe’ for protecting democratic values, but there are different means to achieve this result, among which is technology itself. Indeed, new technologies like automation should not be considered a risk per se. The right question to ask instead is whether new technologies can encourage arbitrary public power and pose challenges for the rule of law.Footnote 47 The challenges to fundamental rights raised by these technologies would lead one to avoid approaches based on self-regulation. This strategy may not be sufficient to ensure the protection of fundamental rights in the information society. At the same time, it is well known that hard law can represent a hurdle to innovation, leading to other drawbacks for the development of the internal market, particularly considering the global development of algorithmic technologies. In the case of the European proposal for the Artificial Intelligence Act,Footnote 48 the top-down approach of the Union, which aims to leave small margins to self-regulation, might be an attempt to protect the internal market from algorithmic tools which would not comply with the European standard of protection. Rather than making operators accountable for developing and implementing artificial intelligence systems, the regulation aims to prevent the consolidation of external standards.
Therefore, a fully harmonised approach would constitute a sound solution to provide a common framework and avoid fragmentation, which could undermine the aim of ensuring the same level of protection of fundamental rights. Besides, co-regulation in specific domains could ensure that public actors are involved in determining the values and principles underpinning the development of algorithmic technologies while leaving the private sector room to implement these technologies under the guidance of constitutional principles. The principle of the rule of law constitutes a clear guide for public actors which intend to implement technologies for public tasks and services. To avoid any effect on the trust and accountability of the public sector, consistency between the implementation of technology and the law is critical for legal certainty. Nonetheless, it is worth stressing that this is not an easy task. Even when legislation is well designed, keeping public power within the principle of legality could be difficult to achieve for different reasons, such as a lack of expertise or the limited budgets available to deal with the new technological scenario.Footnote 49 Besides, in the absence of any regulation, private actors are not required to comply with constitutional safeguards. In this case, the threats to the principle of the rule of law are different and linked to the possibility that private actors develop a set of private standards clashing with public values, particularly when their economic freedoms turn into forms of power.
The COVID-19 pandemic has highlighted the relevance of online platforms in the information society. For instance, Amazon provided deliveries during the lockdown phase, while Google and Apple offered their technology for contact-tracing apps.Footnote 50 These actors have played a critical role in providing services which other businesses or even the State had failed to deliver promptly. The COVID-19 crisis has led these actors to become increasingly involved in our daily lives, becoming part of our social structure.
Nonetheless, commentary has not been exclusively positive. The model of the contact-tracing app proposed by these tech giants has raised various privacy and data protection concerns.Footnote 51 The pandemic has also shown how artificial intelligence can affect fundamental rights online without human oversight. Once Facebook and Google sent their moderators home, the effects of these measures extended to the process of content moderation, resulting in the suspension of various accounts and the removal of some content even though there was no specific reason for it.Footnote 52 This situation not only affected users’ right to freedom of expression but also led to discriminatory results and to the spread of disinformation, thus prompting one to wonder about the roles and responsibilities of private actors in the information society.
1.4 The Role and Responsibilities of Private Actors
At the advent of the digital era, the rise of new private actors could be seen merely as a matter of freedom. The primary legal (but also economic) issue thus was that of protecting such freedom while, at the same time, preventing any possible abuse thereof. This is the reason why competition law turned out to be a privileged tool in this respect,Footnote 53 sometimes in combination with ex ante regulation. Constitutional democracies have adopted a liberal approach – for instance, exempting online intermediaries from liability and providing a minimum regulation to ensure a common legal environment for circulating personal data.Footnote 54 Such an approach was aimed at preserving a new environment, which, at the end of the last century, seemed to promise a new phase of opportunities.
Thanks to this minimal intervention in the digital environment, the technological factor played a crucial role. The mix of market forces and automated decision-making technologies has led to the transformation of economic freedoms into something that resembles the exercise of powers vested in public authorities. The implementation of algorithmic technologies to process vast amounts of information and data is no longer exclusively a matter of profit. Such power can be observed from many different perspectives, as in the field of competition law, as economic and data power.Footnote 55 For the purposes of constitutional law, the concerns are instead about forms of freedom which resemble the exercise of authority. The development of new digital and algorithmic technologies has led to the rise of new opportunities to foster freedom but also to the consolidation of powers proposing a private model of protection and governance of users. The freedom to conduct business has now taken on a new dimension, namely that of private power, which – it goes without saying – brings significant challenges to the role and tools of constitutional law.
One may actually wonder where the connection between algorithms and powers lies, two notions apparently so distant but in fact so close. To explain why these two expressions are connected, we argue that the implementation of the former on a large scale has the potential to give rise to a further transmutation of the classic role of constitutionalism and constitutional theory, in addition to that already caused by the shift from the world of atoms to the world of bits,Footnote 56 where constitutionalism becomes ‘digital constitutionalism’ and power is relocated between different actors in the information society.Footnote 57 This statement requires some clarification. As is well known, constitutional theory frames powers as historically vested in public authorities, which by default hold the monopoly on violence under the social contract.Footnote 58 It is no coincidence that constitutional law was built around the functioning of public authorities. The goal of constitutions (and thus of constitutional law) is to allocate powers between institutions and to make sure that proper limits are set to constrain their action, with a view to preventing any abuse.Footnote 59 In other words, the original mission of constitutionalism was to set mechanisms to restrict government power through self-binding principles, including by providing different forms of separation of powers and constitutional review. To reach this goal, it is crucial to focus on exploring the most disruptive challenges which the emergence of private powers has posed to the modern constitutional state and the various policy options for facing said transformations. This requires questioning the role that constitutions play in the information society and leads one to investigate whether constitutions can and should do something in light of the emergence of new powers other than those exercised by public authorities.
Our claim is that if constitutions are meant as binding on public authorities, something new has to be developed to create constraints on private actors.
Therefore, focusing on the reasons behind the shift from the freedom to conduct business to private power becomes crucial to understanding the challenges for constitutional law in the algorithmic society. Private actors other than traditional public authorities are now vested with forms of power that are no longer merely economic in nature. The apparently strange couple ‘power and algorithms’ does actually make sense and triggers new challenges in the specific context of democratic constitutionalism. Algorithms, as a matter of fact, make it possible to carry out activities of various kinds that may significantly affect individuals’ rights and freedoms. Individuals may not notice that many decisions are carried out in an automated manner without, at least prima facie, any possibility of control on their part. A broad range of decision-making activities are increasingly delegated to algorithms which can advise and, in some cases, make decisions based on the data they process. As scholars have observed, ‘how we perceive and understand our environments and interact with them and each other is increasingly mediated by algorithms’.Footnote 60 In other words, algorithms are not necessarily driven by the pursuit of public interests but are instead sensitive to business needs. Said concerns are even more serious in light of the learning capabilities of algorithms, which – by introducing a degree of autonomy and thus unpredictability – are likely to undermine ‘accountability’ and the human understanding of the decision-making process. For instance, the opacity of algorithms is seen by scholars as a possible cause of discrimination or differentiation between individuals when it comes to activities such as profiling and scoring.Footnote 61
In the absence of any regulation, the global activity of online platforms contributes to producing a para-legal environment on a global scale, competing with States’ authorities. The consolidation of these areas of private power is a troubling process for democracy. Indeed, even if, at first glance, democratic States are open environments in which pluralism flourishes through fundamental rights and freedoms, their stability can be undermined when those freedoms transform into new founding powers overriding basic principles such as respect for the rule of law. In this situation, there is no effective form of participation or representation of citizens in determining the rules governing their community. In other words, the creation of a private legal framework outside any representative mechanism is a threat to democracy due to the marginalisation of citizens and their representatives from law-making and enforcement. This situation shows why it is important to focus on constitutional remedies to address the imbalances of power in the algorithmic society.
1.5 Constitutional Remedies
Within this troubling framework for the protection of fundamental rights and democracies, constitutional law could provide two paths. The first concerns the possible horizontal application of fundamental rights vis-à-vis private parties. The second focuses instead on the path that could be followed in the new season of digital constitutionalism and on a constellation of new rights that could be identified to deal with the new challenges posed by algorithms.
A good starting point is Alexy’s assumption that the issue of the horizontal effect of fundamental rights protected by constitutions (and bills of rights) cannot be detached in theoretical terms from the more general issue of the direct effect of those rights.Footnote 62 In other words, according to the German legal theorist, once it is recognised that a fundamental right has direct effect, that recognition must be characterised by a dual dimension. The first, vertical, dimension concerns the classic relationship of ‘public authority vs individual freedom’, while the second, horizontal, dimension focuses on the relationship between private parties and also, as mentioned previously, on the much less classic relationship between new private powers and individuals/users.
The problem with Alexy’s assumption, which is quite convincing from a theoretical point of view, is that the shift from the Olympus of the legal theorist to the arena of the law in action risks neglecting that courts in different jurisdictions may take quite different approaches to the concrete recognition of the horizontal effect of fundamental rights. This should not come as any surprise, because the forms and limits of that recognition depend on the cultural and historical crucible in which a specific constitutional order is cultivated.
As far as the United States is concerned, the state action doctrine apparently precludes any possibility of applying the US Federal Bill of Rights between private parties, and consequently any ability for individuals to rely on such horizontal effects and to enforce fundamental rights vis-à-vis private actors.Footnote 63 The reason for this resistance to accepting any general horizontal effect of the rights protected by the US Federal Bill of Rights is that the cultural and historical basis of US constitutionalism is rooted in the values of liberty, individual freedom, and private autonomy. The state action doctrine is critical to understanding the scope of the rights enshrined in the US Constitution. Indeed, were the fundamental rights protected by the US Constitution to be extended to non-public actors, this would result in an inevitable compression of the sphere of freedom of individuals and, more generally, of private actors. For instance, such friction is evident when focusing on the right to free speech, which can only be directly enforced vis-à-vis public actors. Historically, the state action doctrine owes its origins to the Civil Rights Cases, a series of rulings dating back to 1883 in which the US Supreme Court denied the power of the US Congress to prohibit racially based discrimination by private individuals under the Thirteenth and Fourteenth Amendments. Even in the area of freedom of expression, however, the US Supreme Court has extended the scope of the First Amendment to private actors on the grounds that they are substantially equivalent to a state actor.
In Marsh v. Alabama,Footnote 64 the US Supreme Court held that the State of Alabama had violated the First Amendment by prohibiting the distribution of religious material by members of the Jehovah’s Witness community within a company town which, although formally privately owned, could be considered to perform a substantially recognisable ‘public function’. In Amalgamated Food Emps. Union Local 590 v. Logan Valley Plaza,Footnote 65 the US Supreme Court considered a shopping centre similar to the company town in Marsh. In Jackson v. Metropolitan Edison,Footnote 66 the US Supreme Court held that equivalence should be assessed by reference to the exercise of powers traditionally reserved exclusively to the state. Nonetheless, in Manhattan Community Access Corp. v. Halleck,Footnote 67 the US Supreme Court more recently adopted a narrow approach to the state action doctrine, recalling in particular its precedent in Hudgens v. NLRB.Footnote 68
This narrow approach is also the standard for protecting fundamental rights in the digital domain; consequently, the US Supreme Court seemingly restricts the possibility of enforcing the free speech protections enshrined in the First Amendment against digital platforms as new private powers.Footnote 69 More specifically, and more convincingly, Berman has observed that a radical state action doctrine,Footnote 70 if its implications are left unquestioned, can lead, in the digital age, to the transformation of cyberspace into a totally private ‘constitution free zone’.Footnote 71 Balkin has recently highlighted a shift in the well-established paradigm of free speech, described as a triangle involving nation-states, private infrastructure, and speakers.Footnote 72 In particular, digital infrastructure companies must be regarded as governors of social spaces rather than mere conduit providers or platforms. This new scenario, in Balkin’s view, calls for a new school of speech regulation, triggered by the dangers of abuse by the privatised bureaucracies that govern end-users arbitrarily and without due process and transparency; it also entails the danger of digital surveillance, which facilitates manipulation.Footnote 73
Despite proposals that a ‘functional approach’ be adoptedFootnote 74 and partial attempts to reveal the limits of fully embracing the state action doctrine in the digital age, the US Supreme Court has recently confirmed in its case law the classic view of the intangibility of the state action doctrine.Footnote 75 However, even one of the US scholars most keenly aware of the de facto public functions carried out by digital platforms concedes that
however important Facebook or Google may be to our speech environment, it seems much harder to say that they are acting like the government all but in name. It is true that one’s life may be heavily influenced by these and other large companies, but influence alone cannot be the criterion for what makes something a state actor; in that case, every employer would be a state actor, and perhaps so would nearly every family.Footnote 76
Shifting from the United States to Europe, the relevant historical, cultural, and consequently constitutional milieu is clearly very different. The constitutional keyword is Drittwirkung, a legal concept originally developed in the 1950s by the German Constitutional Court,Footnote 77 under which an individual plaintiff can rely on a national bill of rights to sue another private individual for the alleged violation of those rights. In other words, it can be defined as a form of horizontality in action or a total constitution.Footnote 78 It is a legal concept that, as mentioned, has its roots in Germany and subsequently migrated to many other constitutional jurisdictions, exerting a strong influence even on the case law of the CJEU and the ECtHR.Footnote 79
It should not come as any surprise that a difference emerged between US and European constitutional practices with regard to the recognition of horizontal effects of fundamental rights. In the United States, as previously noted, individual freedom and private autonomy are not constitutionally compatible with such recognition. In Europe, by contrast, human dignity as a super-constitutional principle supports such recognition, at least in theory.Footnote 80 The very concept of the abuse of rights, which is not recognised under US constitutional law but is explicitly codified in the ECHR and the EUCFR,Footnote 81 seems to reflect the same Euro-centric approach.
In the light of this scenario, it is no coincidence that, as early as 1976, the CJEU decided in Defrenne II to acknowledge and enforce the obligation of private employers (and the corresponding right of employees) to ensure equal pay for equal work, in relation to a provision of the former Treaty establishing the European Economic Community.Footnote 82 Article 119 of the EC Treaty was unequivocally and exclusively addressed to Member States, providing that ‘each Member State shall ensure that the principle of equal pay for male and female workers for work of equal value is applied’. Compared to the wording of that provision, each provision of the EUCFR is more detailed and, therefore, more amenable to potential horizontal direct effect. It is no coincidence that in 2014, while in AMS the CJEU adopted a minimalist approach, accepting the possible horizontal direct effect only of those provisions of the EUCFR from which individuals could derive a legal right and not simply a principle, it also applied Articles 7 and 8 EUCFR to the enforcement of digital privacy rights, specifically against search engines, in Google Spain.Footnote 83
Several years later, the CJEU had the opportunity to further develop the horizontal application of the EUCFR. More specifically, in four judgments from 2018 – Egenberger,Footnote 84 IR v. JQ,Footnote 85 Bauer,Footnote 86 and Max PlanckFootnote 87 – the CJEU definitively clarified the horizontal scope of Articles 21, 31(2), and 47 of the EUCFR within disputes between private parties.Footnote 88 In the light of the emerging scenario, it seems clear that a potential initial answer to the new challenges of constitutional law in the age of new private powers could be found in the brave horizontal enforcement of fundamental rights, especially in the field of freedom of expression and privacy and data protection.
However, as mentioned previously, it is also worth reaching beyond the debate about the horizontal/vertical effects of fundamental rights in the digital age in order to suggest an alternative weapon for the challenges that will need to be faced during the new round of digital constitutionalism. Most notably, it is necessary to design a frame that describes the relationship between the three parties that Balkin places at the heart of the information society: platforms, states, and individuals.Footnote 89 In other words, a digital habeas corpus of substantive and procedural rights should be identified, which can be enforced by the courts as they are inferred from existing rights protected under current digital constitutionalism.Footnote 90 Therefore, a new set of rights can be derived from such a revisited understanding of individuals in the new digital context – among others, the right that decisions impacting the legal and political sphere of individuals be taken by human beings, and not exclusively by machines, even the most advanced and efficient ones.
The significant paradigm shift that individuals are witnessing in their relationship with power thus requires revisiting their traditional status and focusing on a set of rights that can be enforced vis-à-vis not only governmental powers but also private actors. In particular, hard law could certainly play a role in remedying the lack of fairness, transparency, and accountability, which appears to be the most important challenge raised by the implementation of algorithmic systems. Although ensuring transparency may be complex, for multiple reasons such as trade secrets, it is possible to mitigate this issue by granting different forms of transparency and defining procedural safeguards which online platforms should abide by when making decisions that would otherwise be deprived of any public guarantee. While substantive rights concern the status of individuals as subjects of a kind of sovereign power that is no longer exclusively vested in public authorities, procedural rights stem from the expectation that individuals have of claiming and enforcing their rights before bodies other than traditional jurisdictional bodies, which employ methods different from judicial discretion, such as technological and horizontal due process. As a result of this call for algorithmic accountability, a new set of substantive and procedural rights would constitute an attempt to remedy the weakness and the transparency gap that individuals suffer in their technologically biased relationship with private actors, and their lack of any bargaining power.
The right to explanation is just one of the new rights that could contribute to mitigating the lack of fairness, transparency, and accountability in automated decision-making. Indeed, together with the right to obtain information on the way their data are being processed, individuals should also be able to rely on a right to easy access (right to accessibility) and on a right to obtain a translation from the language of technology into the language of human beings. While the former is meant as the right to be provided with the possibility to interact with algorithms and with the digital platforms implementing them, the latter requires the use of simple, clear, and understandable information and allows users not only to learn, for example, the reasons for the removal of online content, but also to better exercise their rights before a judicial or administrative body.
These substantive rights find their justification in the ‘hidden price’ that individual users pay to digital platforms while enjoying their services apparently free of charge – a cost that is not limited to personal data. Human behaviours, feelings, emotions, and political choices also have value for algorithms, most notably to the extent that they help machines learn something about individual reactions to certain inputs. The new set of rights seems to respond to Pasquale’s questions about the transparency gap between users and digital platforms:
Without knowing what Google actually does when it ranks sites, we cannot assess when it is acting in good faith to help users, and when it is biasing results to favour its own commercial interests. The same goes for status updates on Facebook, trending topics on Twitter, and even network management practices at telephone and cable companies. All these are protected by laws of secrecy and technologies of obfuscation.Footnote 91
If, on the one hand, this new digital pactum subjectionis requires new rights to be recognised and protected, it is also necessary to understand how their enforcement can be effective and how they can actually be put into place. This new set of substantive rights is associated with the need for certain procedural guarantees that allow individuals to ensure that these expectations can actually be met. Therefore, it is also necessary to investigate the ‘procedural counterweight’ to the creation of new substantive rights, focusing on the fairness of the process through which individuals may enforce them. Indeed, since the existing literature has to date focused on the exercise of powers, there is no reason to exclude from the scope of application of procedural guarantees those situations where powers are conferred upon private bodies charged with the performance of public functions.Footnote 92
Digital platforms can be said to exercise administrative powers which are normally vested in public authorities. However, looking at the way rights can be exercised vis-à-vis these new actors, vagueness and opacity can still be noticed in the relevant procedures. Among others, the right to be forgotten clearly shows the lack of appropriate procedural safeguards, since steps such as the evaluation of requests for delisting and the adoption of the relevant measures (whether consisting of the removal of a link or of the confirmation of its lawfulness) rely entirely on a discretionary assessment supported by the use of algorithms. Therefore, the mere horizontal application of the fundamental right to the protection of personal data enshrined in Article 8 EUCFR does not prove satisfactory. Likewise, the notice and takedown mechanisms implemented by platforms hosting user-generated content and by social networks do not entirely meet the requirements of transparency and fairness that would make the status of users/individuals enforcing their rights vis-à-vis these platforms comparable to the status of citizens exercising their rights against public authorities.
In order for these new substantive rights to be actually protected and made enforceable vis-à-vis the emerging private actors, procedural rights play a pivotal role. Crawford and Schultz have explored the need to frame a ‘procedural data due process’.Footnote 93 The application of such a technological due process would also affect the substantive rights, as it should preserve, in accordance with the Redish and Marshall model of due process, values such as accuracy; the appearance of fairness; equality of inputs; predictability, transparency, and rationality; participation; revelation; and privacy-dignity.Footnote 94 The traditional function of due process in keeping powers separate has to be fine-tuned to the specific context of algorithms, where interactions occur between various actors (algorithm designers, adjudicators, and individuals). Citron has pointed out some requirements that automated systems should meet in order to satisfy procedural due process, including (a) adequate notice given to individuals affected by the decision-making process; (b) an opportunity for individuals to be heard before the decision is released; and (c) records, audits, or judicial review.Footnote 95 According to Crawford and Schultz’s model of procedural data due process, the notice requirement can be fulfilled by providing individuals with ‘an opportunity to intervene in the predictive process’ and to know (i.e., to obtain an explanation about) the type of predictions and the sources of data. Moreover, the right to be heard is seen as a tool for ensuring that, once data are disclosed, individuals have a chance to challenge the fairness of the predictive process. The right to be heard thus implies having access to a computer program’s source code, or to the logic of a computer program’s decision.
Lastly, this model requires guarantees of impartiality of the ‘adjudicator’, including judicial review, to ensure that individuals do not suffer from any bias while being subject to predictive decisions.
The proposal for the Digital Services Act provides an example of these procedural safeguards limiting platforms’ powers.Footnote 96 With the goal of defining a path towards the digital age, the proposal maintains the rules on the liability of online intermediaries, now established as the foundation of the digital economy and instrumental to the protection of fundamental rights. In fact, based on the proposal, there will be no changes to the liability system but rather some additions aiming to increase the level of transparency and accountability of online platforms. It is no coincidence that, among the proposed measures, the DSA introduces new obligations of due diligence and transparency, with particular reference to the procedure of notice and takedown and to redress mechanisms.
1.6 Conclusions
Algorithmic systems have contributed to the introduction of new paths for innovation, thus producing positive effects for society as a whole, including fundamental rights and freedoms. Technology is also an opportunity for constitutional democracies. Artificial intelligence can provide better systems of enforcement of legal rules or improve the performance of public services. Nonetheless, the domain of inscrutable algorithms characterising contemporary society challenges the protection of fundamental rights and democratic values while encouraging lawmakers to find a regulatory framework balancing risk and innovation, considering the role and responsibilities of private actors in the algorithmic society.
The challenges raised by artificial intelligence technologies are not limited to freedom of expression, privacy, and data protection. Constitutional democracies are under pressure to ensure legal certainty and predictability of automated decision-making processes which can collectively affect democratic values. Individuals are increasingly surrounded by ubiquitous systems that do not always ensure the possibility of understanding and controlling their underlying technologies. Leaving algorithms without any safeguards would mean opening the way towards techno-determinism, allowing the actors who govern these automated systems to arbitrarily determine the standard of protection of rights and freedoms at a transnational level under the logics of digital capitalism. This is why it is critical to understand the role of regulation in the field of artificial intelligence, where cooperative efforts between the public and private sector could lead to a balanced approach between risk and innovation. Constitutional democracies cannot leave private actors to acquire areas of power outside constitutional limits.
Within this framework, both the horizontal effect doctrines and new substantive and procedural rights seem to be promising candidates among the available remedies. In the face of these challenges, ius dicere is unlikely to lose the predominant role over political power it has acquired in recent years. The challenges raised by new automated technologies are likely to operate as a call for courts to protect fundamental rights in the information society while increasing pressure on lawmakers to adopt new rights and safeguards.Footnote 97 It is conceivable that, despite the codification of new safeguards, the role of courts in interpreting the challenges raised by new technologies is far from exhausted, also due to the role of online platforms. Indeed, artificial intelligence technologies have raised various questions concerning the protection of fundamental rights which have not yet been answered through the political process. We have seen how constitutional law can provide some solutions to these new challenges. Nonetheless, in the absence of any form of regulation, the role of courts is likely to be predominant. The COVID-19 pandemic has only amplified this dynamic: it has confirmed the legislative inertia in the face of the new challenges associated with the implementation of technology, as well as the increasing role of online platforms in providing services and new solutions to combat the global pandemic.
Therefore, the primary challenge for constitutional democracies in the algorithmic society might be to limit the rise of global private powers replacing democratic values with private determinations. This entails neither heavy intervention in the market nor a purely liberal approach, but rather the definition of a constitutional framework in which both public and private powers are bound by safeguards and procedures.