13.1 Context – New Wine in Old Bottles?
The major focus of the book is on the constitutional challenges of the algorithmic society. In the public/private divide type of thinking, such an approach puts the constitution, and thereby the state, into the limelight. There is a dense debate on the changing role of the nation-state in the aftermath of what is called globalization and on how the transformation of the state is affecting private law and thereby private parties.Footnote 1 This raises the question of whether the public/private divide can still serve as a useful tool to design responsibilities on both sides, public and private.Footnote 2 If we look for a constitutional framing of business activities in a globalized world, there are two possible approaches: the first is the external or outer reach of national constitutions; the second is the potential impact of a global constitution. Our approach is broader and narrower at the same time. It is broader because we look not at the constitutional dimension alone but at the public/private law below the constitution and at the role and impact of private responsibilities; it is narrower because we engage neither in the debate on the external/outer reach of nation-state constitutions nor in the debate on the existence of a ‘Global Constitution’ or an ‘International Economic Constitution’ based on the GATT/WTO and international human rights.Footnote 3 Such an exercise would require a discussion of global constitutionalization and global constitutionalism in and through the digital society and digital economy.Footnote 4
Therefore, this contribution does not look at private parties through the lens of constitutions or constitutionalization processes but through the lens of the private parties themselves, here companies. The emphasis is on the responsibilities of private companies, which does not mean that there is no responsibility of nation-states. Stressing private responsibilities below the surface of the constitution directs attention to the bulk of national, European and international rules that have been developed over recent decades and that in one way or another deal with the responsibility, or perhaps better the responsibilities, of private and public actors. Responsibility is a much broader term than legal civil liability, as it includes the moral dimension,Footnote 5 which may or may not leave room to give private responsibility a constitutional outlook or, more demanding still, a constitutional anchoring, be it in a nation-state constitution, the European or even the Global Constitution.Footnote 6 The culminating point of the constitutional debate is the question of whether human rights address states alone or also bind private parties directly.Footnote 7 Again, this is not our concern. The focus is on the level below the constitution, the ‘outer space’ where private parties and public – mainly administrative – authorities co-operate in the search for solutions that strike a balance between the freedom of private companies to do business outside state borders and their responsibility as well as that of the nation-states.
The intention is to deliver a rough overview of where we stand politically, economically and legally when discussing possible legal solutions that design the responsibility of private companies in the globalized economy. This is done against the background of Baldwin’sFootnote 8 structuring of the world trade order along the line of the decline of, first, transportation costs and, second, communication costs. The two stages can be associated with two very different forms of world trade. The decline of transportation costs enabled the establishment of the post–World War II order. Products and services could circulate freely without customs and non-tariff barriers to trade. The conditions under which the products were manufactured, however, were left to the nation-states. This allowed private companies to benefit from economies of scale and from differences in labour costs and, later, environmental costs. The decline of communication costs changed the international trade order dramatically. It enabled the rise of global value chains, a term often used interchangeably with global supply chains. Here product and process regulation are interlinked through contract.Footnote 9 It will have to be shown that the two waves, superficially regarded, show similarities, economically and technologically, though there are differences which affect the law and which will have to be taken into account in the search for solutions.
13.2 The First Wave – Double Standards in Unsafe Products and Unsafe Industrial Plants
Timewise, we are in the 1960s and 1970s. International trade is blossoming. The major beneficiaries are Western democratic states and multinationals, as they were then called. Opening the gateway towards the responsibility of multinationals ‘beyond the nation-state’Footnote 10 takes the glamour away from the sparkling language of the algorithmic economy and society and discloses a well-known though rather odd problem which industrialized states had to face hand in hand with the rise of the welfare state, in whatever form, and the increase of legislation protecting consumers, workers and the environment against unsafe products.
13.2.1 Double Standards on the Export of Hazardous Products
The Western democratic states restricted the reach of the regulation of chemicals, pharmaceuticals, pesticides and dangerous technical goods to their territory, paving the way for their industries to export products to the rest of the world even though their use was prohibited or severely restricted in the home country. The phenomenon became known worldwide as the policy of ‘double standards’ and triggered political awareness around the globe: in the exporting and importing states, in international organizations and in what could ambitiously be called an emerging global society.Footnote 11 Communication costs, however, determined the search for political solutions. It has to be recalled that until the 1980s telephone costs were prohibitive, fax did not yet exist and the only way to engage in serious exchange was to meet physically. The decrease in transportation costs rendered international gatherings possible. The level of action to deal with ‘double standards’ was first and foremost political.
The subject-related international organizations – the WHO with regard to pharmaceuticals, UNEP and FAO with regard to chemicals, pesticides and waste, and the later abolished UN-CTC with regard to dangerous technical goods – invested in the elaboration of international standards on what is meant by ‘hazardous’ and equally pushed for international solutions tying the export of double-standard products to the ‘informed’ consent of the recipient states. Within the international organizations, the United States dominated the discussions and negotiations. That is why each and every search for a solution was guided by the attempt to secure the support of the United States, whose president was no longer Jimmy Carter but Ronald Reagan. At the time, the European Union (EU) was nearly non-existent in the international sphere, as it had not yet gained the competence to act on behalf of the Member States or jointly with the Member States. The Member States were speaking for themselves, split into two camps: the hard-core free-trade apologists and the softer group of states that were ready to join forces with voices from what is today called the Global South, seeking a balance between free trade and labour, consumer and environmental protection. Typically, the controversies ended in soft-law solutions: recommendations adopted by the international organizations, if not unanimously then at a minimum with the United States abstaining.
It is a long way from the recommendations adopted in the mid-1980s to the Rotterdam Convention on the export of hazardous chemicals and pesticides, adopted in 1998, which entered into force in 2004.Footnote 12 On the bright side, there is definitely the simple fact that multilateralism was still regarded as the major and appropriate tool for what was recognized as a universal problem calling for universal solutions. However, there is also a dark side to be taken into consideration. The UN organizations channelled the political debate on double standards, which was originally much more ambitious. NGOs, environmental and consumer organizations, and civil society activists were putting political pressure on the exporting countries to abolish the policy of double standards. The highly conflictual question then was, and still is: Is there a responsibility of the exporting state for the health and safety of the citizens of the recipient countries? Is there even a constitutional obligation of nation-states to exercise some sort of control over the activities of ‘their’ companies, which operate from their Western home base in the rest of the world? How far does the responsibility/obligation reach? If double standards are legitimate, are nation-states at least constitutionally bound to elaborate and to ensure respect for internationally recognized standards on the safety of products, on health and safety at work, and on environmental protection?
The adoption of the Rotterdam Convention suffocated the constitutional debate and shifted the focus towards its ratification. The juridification of a highly political conflict on double standards ended in de-politicization. The attention shifted from the public political fora to the legal fora. The EU and its Member States ratified the Convention, implemented through EU Regulation 304/2003, later 698/2008, today 649/2012.Footnote 13 The United States signed the Convention but never ratified it. In order to assess the potential impact of the Rotterdam Convention or, more narrowly, the role and function of the implementing EU Regulation on the European Member States, one has to dive deep into the activities of the European Chemicals Agency, where all the information from the Member States comes together.Footnote 14 When comparing the roaring public debate on double standards with the non-existent public interest in its bureaucratic handling, one may wonder to what extent ‘informed consent’ has improved the position of the citizens in the recipient states. The problem of double standards has not vanished at all.Footnote 15
13.2.2 Double Standards on Industrial Plants
Public attention seems to focus ever more strongly on catastrophes which shatter the global world order – from time to time, but with a certain regularity. The level of action is not necessarily political or administrative; it is judicial. The eyes of the victims, but also of NGOs, civil society organizations, and consumer and environmental organizations, were and are directed towards the role and function of courts. Dworkin published Law’s Empire, in which he relied on the ‘Hercules judge’, in 1986 – exactly at a time when, even in the transnational arena, national courts and national judges turned into key actors and had to carry the hopes of all those who were fighting against double standards. This type of litigation can easily be associated with Baldwin’s distinction. The decline of transportation costs allowed Western-based multinationals to build subsidiaries around the world. Due to economies of scale, it was cheaper for the multinationals to have the products manufactured in the subsidiaries and shipped back to the Western world to be assembled. Typically, the subsidiaries were owned – either fully or at least with a 51 per cent majority – by the mother company, which had its business seat in the United States or in Europe.
Again, the story to tell is not new, but it is paradigmatic for the 1980s. In 1984, a US-owned chemical plant in Bhopal, India, exploded. Thousands of people died. The victims argued that the plant did not even respect the rather low Indian standards of health and safety at work and Indian environmental standards. They sought compensation from Union Carbide Corporation, the mother company, and launched tort claims in the United States.Footnote 16 The catastrophe mobilized NGOs and civil society organizations, along with class-action lawyers in the United States who combined the high expectations of the victims with their self-interest in bringing the case before US courts. The catastrophe laid bare the range of legal conflicts which arise in North–South civil litigation. Is there a responsibility of US companies operating outside US territory to respect the high standards of the export state, or international minimum standards where they exist? Or does it suffice to comply with the lower standards of the recipient state? Is the American mother company legally liable for the harm produced through its subsidiary to the Indian workers, the Indian citizens affected in the community and the Indian environment? Which is the competent jurisdiction, that of the US or that of India, and what is the applicable law: US tort and class-action law with its high compensation schemes, or the tort law of the recipient state? The litigation fell into a period when socio-legal research played a key role in the United States and legal scholars engaged heavily in the litigation, providing legal support to the victims. There was a heated debate, even among scholars sympathizing with the victims, over whether it would be better for India to instrumentalize Bhopal so as to develop the Indian judiciary through litigation in India – accepting the risk that Indian courts could give carte blanche to the American mother company – or whether the rights of the victims should be preserved through the much more effective and generous US law before US courts. One of the key figures was Marc Galanter from Wisconsin, who left the material collected over decades on the litigation in the United States and in India, background information on the Indian judiciary, and material on the role and function of American authorities to the Wisconsin public library.Footnote 17 It remains to be added that in 1986 the US district court declined jurisdiction on the ground of forum non conveniens and that the victims, who had to refile their case before Indian courts, have never been adequately compensated – to this day. There are variations on the Bhopal type of litigation; the last one so far to have gained comparable public prominence is Kiobel.Footnote 18
The political and legal debate on double standards which dominated the public and legal fora in the 1980s differs in two ways from the one we have today on the responsibility of private parties in the digital economy and society. First and foremost, the primary addressees of the call for action were the Western democratic states as well as international organizations. They were called upon to find appropriate solutions for what could not be solved otherwise. There are few examples of existing case law on double standards. Bhopal, though mirroring the problem of double standards, is different due to the dimension of the catastrophe and to the sheer number of victims, who were identifiable. It is still noteworthy, though, that the international community left the search for solutions in the hands of the American and the Indian judiciaries respectively, and that there was no serious political attempt, by either of the two states or by the international community, to seek extra-judicial compensation schemes. The American court delegated the problem of double standards back to the Indian state and the Indian society alone. Second, in the 1980s, human rights were not yet invoked – or at least to a much lesser extent – in the search for political as well as judicial solutions. There was less emphasis on the ‘rights’ rhetoric, on consumer rights as human rights or the right to safety as a human right.Footnote 19 Health, safety and the environment were treated as policy objectives that had to be implemented by the states, either nationally or internationally. The 1980s still breathe a different spirit: the belief in and the hope for an internationally agreeable legal framework that could provide a sound compromise between export and import states or, put differently, between the free-trade ideology and the need for some sort of internationally agreeable minimum standards of protection.
13.3 The Second Wave – GAFAs and Global Value Chains (GVCs)
When it comes to private responsibilities in the digital economy and society, attention is directed to the GAFAs, to what is called the platform economy and to their role as gatekeepers to the market. Here competition law ties in. National competition authorities have taken action against the GAFAs under national and European competition law, mainly with reference to the abuse of a dominant position.Footnote 20 The EU, on the other hand, has adopted Regulation 2019/1150Footnote 21 on platform-to-business relations in order to ‘create a fair, transparent and predictable business environment for smaller businesses and traders’, which applies from 12 July 2020. The von der Leyen Commission has announced two additional activities: a sector-specific proposal meant to counter potential anti-competitive effects by December 2020, and a Digital Services Act which will bring amendments to the e-Commerce Directive 2000/31/EC, probably also with regard to the rights of customers. While platforms hold a key position in the digital economy and society, in Baldwin’s scenario they form no more than an integral part of the transformation of the economic order towards GVCs. Platforms help reduce communication costs, and they open up markets for small and medium-sized companies in the Global South which had no opportunity to gain access to the market before the emergence of platforms.
The current chapter is not the ideal place to do justice to the various roles and functions of platforms or GVCs. There is not even an agreed-upon definition of platforms or GVCs. What matters in our context, however, is to understand GVCs as networks interwoven through a dense set of contractual relations, which cannot be reduced to a lead company that organizes the chain upstream and downstream and holds all the power in its hands. Not only public attention but also political attention is very much concentrated on the GAFAs and on multinationals, sometimes even identified and personalized. Steve Jobs served as the incarnation of Apple, and Mark Zuckerberg is a symbolic and even a public figure. Private actors are referred to under various denominations: sociétés, corporations, multinationals. Digitalization enabled the development of the platform economy. Communication costs were reduced to close to zero. Without digitalization and without the platforms, the great transformation of the global economy, as Baldwin calls it, would not have been possible. The result is GVCs understood as complex networks, in which SMEs too may be able to exercise power; moreover, the focus on the chain sets aside the external effects of contractualization on third parties.Footnote 22 That is why the personalization of the GAFAs is as problematic as the desperate search for a lead company which can be held responsible upstream and downstream.Footnote 23
An overview of the more recent attempts at the international, national and EU levels lays the ground for discussion. The idea of holding multinationals responsible for their actions in third countries, especially down the GVCs, has been vividly debated in recent years. Discussions have evolved to cover not only the protection of human rights but also environmental law, labour law and good governance in general. Developments in the field and the search for accountability have led to political action at the international level, to legislative action at the national and European levels and to litigation before national courts. Most of the initiatives fall short of an urgently needed holistic perspective – one which takes the various legal fields into account, takes the network effects seriously and provides for an integrated regulation of due diligence in corporate law, of commercial practices, of standard terms and of contractual and tortious liability, let alone the implications with regard to labour law, consumer law and environmental law within GVCs.Footnote 24
13.3.1 International Approaches on GVCs
In June 2011 the United Nations Human Rights Council unanimously adopted the Guiding Principles on Business and Human Rights (UNGPs). This was a major step towards the protection of human rights and the evolution of the concept of Corporate Social Responsibility. The adoption of the UNGPs was the result of thirteen years of negotiations. An important earlier step had been taken in 2008, when the Human Rights Council adopted the framework ‘Protect, Respect and Remedy: A Framework for Business and Human Rights’.Footnote 25 The framework laid down three fundamental pillars: the duty of the state to protect against human rights violations by third parties, including companies; the responsibility of companies to respect human rights; and better access by victims to effective remedies, both judicial and non-judicial. The Guiding Principles, which are seen as the implementation of the Protect, Respect and Remedy Framework, further detail how the three pillars are to be developed. The Guiding Principles are based on the recognition of
[the] State’s existing obligations to respect, protect and fulfil human rights and fundamental freedoms; The role of business enterprises as specialized organs of society performing specialized functions, required to comply with all applicable laws and to respect human rights; the need for rights and obligations to be matched to appropriate and effective remedies when breached.Footnote 26
The Guiding Principles not only cover state behaviour but also introduce a corporate responsibility to respect human rights as well as access to remedies for those affected by corporate behaviour or activities. Despite its non-binding nature, the UN initiative demonstrates the intention to engage corporations in preventing negative impacts of their activities on human rights and in making good the damage they nevertheless cause.
This is not the place to give a detailed account of the initiatives taken at the international level, but it is worth pausing on the case of the OECD. The OECD worked closely with the UN Human Rights Council in elaborating the OECD Guidelines for Multinational Enterprises.Footnote 27 The guidelines notably introduced an international grievance mechanism. The governments that adhere to the guidelines are required to establish a National Contact Point (NCP), which has the task of promoting the OECD guidelines and handling complaints against companies that have allegedly failed to adhere to the guidelines’ standards. The NCP usually acts as a mediator or conciliator in case of disputes and helps the parties reach an agreement.Footnote 28
13.3.2 National Approaches to Regulate GVCs
Not least because of this international impact and the changing global environment, national legislators are becoming more willing to address the issue of the responsibility of corporations for their actions abroad from a GVC perspective. They focus explicitly or implicitly on a lead company which is to be held responsible. None have taken the network effects of GVCs seriously. In 2010, California passed the Transparency in Supply Chains Act;Footnote 29 the same year, the United Kingdom adopted the UK Bribery Act, which creates a duty for undertakings carrying on an economic activity in Britain to verify that there is no corruption in the supply chain.Footnote 30 The Bribery Act was then complemented by the UK Modern Slavery Act 2015, which focuses on human trafficking and exploitation in GVCs.Footnote 31 Along the same lines, the Netherlands adopted a law on the duty of care in relation to child labour, covering international production chains.Footnote 32 Complemented by EU instruments, such legislation is useful and constitutes a step forward, particularly at the political and legislative levels. Nevertheless, the focus on a single sector, product or set of rights does not enable the body of initiatives to be mutually reinforcing. There is a crucial need for a holistic, network-related approach to the regulation of GVCs.
Legislation on the responsibility of multinationals for human rights, environmental or other harms is being designed in different countries. Germany and Finland have announced that they are in the process of drafting due diligence legislation.Footnote 33 Switzerland had been working on a proposal, led by NGOs and left parties. The initiative was put to a popular vote in the last days of November 2020. A total of 47 per cent of the population participated, of which 50.73 per cent voted ‘Yes’.Footnote 34 The project was, however, rejected at the level of the cantons. Therefore this initiative will not go forward. At the time of writing, it seems that a lighter initiative will be discussed instead – one where responsibility is not imposed along the supply chain but only on Swiss companies in third countries. The vote is nevertheless remarkable in terms of the willingness to carry out such a project, of the participation and of the result. The result of the vote of the cantons can be partly explained by the lobbying strategies that multinationals conducted from the beginning of the initiative.
The French duty of vigilance law was adopted in 2017 and introduced into the Commercial Code among the provisions on public limited companies, in the sub-part on shareholders’ assemblies.Footnote 35 It requires large public limited companies with subsidiaries abroad to establish a vigilance plan. A vigilance plan sets out vigilance measures that identify risks and measures to prevent serious harm to human rights, health, safety or the environment resulting from the activities of the mother company, but also of the companies it controls, its subcontractors and its suppliers. The text provides for two enforcement mechanisms. First, a formal notice (mise en demeure) can be addressed to a company that does not establish a vigilance plan or establishes an incomplete one. The company has three months to comply with its obligations. Second, there can be an action in responsibility (action en responsabilité) against the company. Here the company must repair the harm that compliance with its obligations would have avoided. French multinationals have already received letters of formal notice. This is the case of EDF and its subsidiary EDF Energies Nouvelles for human rights violations in Mexico.Footnote 36 The first case was heard in January 2020. It was brought by French and Ugandan NGOs against Total. The NGOs argue that the vigilance plan designed and put in place by Total does not comply with the law on due diligence and that the measures adopted to mitigate the risks are insufficient or do not exist at all.
13.3.3 The Existing Body of EU Approaches on GVCs and the Recent European Parliament Initiative
Sector-specific or product-specific rules imposing due diligence obligations on GVCs have been adopted at the EU level. The Conflict Minerals RegulationFootnote 37 and the Regulation on timber productsFootnote 38 impose obligations along the supply chain; the importer at the start of the GVC bears the obligations. The Directive on the Disclosure of Non-Financial and Diversity Information obliges large capital market-oriented companies to include in their non-financial statement information on the effects of the supply chain and the supply-chain concepts they pursue.Footnote 39 The Market Surveillance Regulation extends the circle of obligated economic operators in the EU to include participants in GVCs, thus already regulating extraterritorially.Footnote 40 The Directive on unfair trading practices in the global food chain regulates trading practices in supply chains through unfair competition and contract law.Footnote 41 Although these bits and pieces of legislation introduce a form of due diligence along the supply chain, they remain product- or sector-specific, which prevents an overall legal approach to due diligence across sectors for all products. This concern is addressed by the latest Recommendation of the European Parliament.
Most recently, in September 2020, the JURI Committee of the European Parliament published a draft report on corporate due diligence and corporate accountability which includes recommendations for drawing up a Directive.Footnote 42 Although the European Parliament’s project has to undergo a number of procedures and discussions among the European institutions and is unlikely to be adopted in its current form, a few aspects are relevant for our discussion. Article 3 defines due diligence as follows:
‘[D]ue diligence’ means the process put in place by an undertaking aimed at identifying, ceasing, preventing, mitigating, monitoring, disclosing, accounting for, addressing, and remediating the risks posed to human rights, including social and labour rights, the environment, including through climate change, and to governance, both by its own operations and by those of its business relationships.
Following the model of the UN Guiding Principles, the scope of the draft legislation goes beyond human rights to cover social and labour rights, the environment, climate change and governance. Article 4 details that undertakings are to identify and assess risks and publish a risk assessment. This risk-based approach is based on the second pillar of the UN Guiding Principles; it is also followed in the French due diligence law. In case risks are identified, a due diligence strategy is to be established whereby an undertaking designs measures to stop, mitigate or prevent such risks. The firm is to disclose reliable information about its GVC, namely, names, locations and other relevant information concerning subsidiaries, suppliers and business partners.Footnote 43 The due diligence strategy is to be integrated in the undertaking’s business strategy, particularly in the choice of commercial partners. The undertaking is to contractually bind its commercial partners to comply with the company’s due diligence strategy.
13.3.4 Litigation before National Courts
Civil society, NGOs and trade unions are key players in holding multinationals accountable for their actions abroad and along the GVC. They have supported legal actions for human rights violations beyond national territories. Such involvement of civil society is considerably facilitated through digitalization, through the use of platforms and through the greater transparency in GVCs.Footnote 44 Courts face cases where they have to assess violations of human rights in third countries by multinationals and their subsidiaries and construct extraterritorial responsibility. There is a considerable evolution from the 1980s in that the rights rhetoric goes beyond human rights to cover labour law, environmental law, misleading advertising and corporate law. Although the rights rhetoric recognizes the moral responsibility of private companies and reflects the gravity of the violations, the challenges before and during trials in turning a moral responsibility into legal liability are numerous.
In France, three textile NGOs brought a complaint arguing that Auchan’s communication strategy regarding its commitment to social and environmental standards in the supply chain constituted misleading advertising, since Auchan’s products were found in the Rana Plaza factory in Bangladesh, a factory well known for its poor working and safety conditions. The case was dismissed at the investigation stage. In another case, Gabonese employees of COMILOG were victims of a train accident while at work, which led to financial difficulties for the company. They were dismissed and promised compensation, which they never received. With the support of NGOs, they brought the case to a French employment tribunal, arguing that COMILOG was owned by a French company. Their claim was dismissed at first instance but succeeded on appeal, where the court held COMILOG France and COMILOG International responsible for their own conduct and for the conduct of their subsidiaries abroad. On the merits, the court found that COMILOG had to compensate the workers. The Cour de Cassation subsequently quashed this finding, arguing that there was insufficient evidence of the legally required strong link with the mother company in France.Footnote 45 There is a considerable number of cases with similar constellations, in which courts struggle to find a coherent approach to these legal issues.
In Total, the NGOs claimed that the vigilance plan was incomplete and that it either did not offer appropriate mitigating measures or failed to adopt them. The court did not rule on the merits, as the competence lies with the commercial court, the law on due diligence being part of the Commercial Code. Nevertheless, the court drew a distinction between the formal notice procedure, which is targeted at the vigilance plan and its implementation, and the action in responsibility.Footnote 46 It is unclear whether the court suggested a twofold jurisdiction: a commercial one for due diligence strategies and another for actions in responsibility. The case triggers fundamental questions as to what a satisfactory vigilance plan is and what appropriate mitigating measures are. It also requires clarification of the relevant field of law applicable, the relevant procedure and the competent jurisdiction.
Even if there is an evolution as to the substance, today’s cases carry the heritage of those from the 1980s. Before ruling on the merits, courts engage in complex procedural issues, just as in the Bhopal litigation or Kiobel. Such legal questions have not yet been settled at the national level, and they are still examined on a case-by-case basis. This lack of consistency renders the outcome of litigation uncertain. The first barrier is procedural; it concerns the jurisdiction of the national court over corporate action beyond the scope of its territorial jurisdiction. The second relates to the responsibility of mother companies for their subsidiaries. In the two Shell cases brought in the UKFootnote 47 and in the Netherlands,Footnote 48 Nigerian citizens had suffered environmental damage which affected their territory, water, livelihood and health. Here the jurisdiction of the national courts was not an issue, but the differentiation between the mother company and its subsidiary remained controversial.Footnote 49
The tour d’horizon indicates how fragile the belief in judicial activism still is. The adoption of due diligence legislation has not levelled the playing field. Courts are left to design the contours and requirements of due diligence. Two methodological questions are at the heart of the ongoing discussions of the private responsibilities of companies in GVCs: Who is competent? Who is responsible? Such are the challenges of the multilevel, internationalized and digitalized environment, in which the law finds itself ill-equipped to address the relevant legal questions.
13.3.5 Business Approaches to GVCs within and beyond the Law
Recent initiatives suggest a different approach, one where legal obligations are placed on companies not only to comply with their own obligations but also to make them responsible for the respect of due diligence strategies along the GVC. The role and function of Corporate Social Responsibility and Corporate Digital Responsibility are in the political limelight.Footnote 50 Firms thereby have the potential to exercise influence over the GVC. This is particularly true where a lead company can easily be identified. If the upstream lead company decides to require its downstream partners to comply with its due diligence strategy, the lead company might be able to ensure compliance.Footnote 51 In GVCs, contracts are turned into a regulatory tool to the benefit of the lead company and perhaps to the benefit of public policy goals. There are two major problems: the first results from the exercise of economic power, which might be used for good, but the opposite is also true. The second relates to the organization of the GVC, which more often than not lacks a lead company and is composed of a complex network of big, small and medium-sized companies. Designing responsibilities in networks is one of the still unsolved legal issues.
A consortium of French NGOs has drafted a report on the first year of application of the law on due diligence, in which they examined eighty vigilance plans published by French corporations falling under the scope of the law.Footnote 52 The report is entitled ‘Companies Must Do Better’ and sheds light on questions we have raised before. As regards the publication and content of the plans, not all companies have published their vigilance plans; some plans are incomplete, some lack transparency and others seem to ignore the idea behind the due diligence plan altogether. The report states, ‘The majority of plans are still focusing on the risks for the company rather than those of third parties or the environment.’Footnote 53 Along the different criteria of the vigilance plan analysed by the consortium of NGOs, it becomes clear that few companies have developed methodologies and appropriate responses in designing their due diligence strategy and in identifying and mitigating risks. It is also noted that companies have re-used previous policies and bundled them together to constitute due diligence. This lack of seriousness does not only make the vigilance plans unreadable; it negates any due diligence strategy of the firm. If multinationals do not take legal obligations seriously at the level of the GVC lead company, are they likely to produce positive spillover effects along the chain? It is too early to condemn the regulatory approach and the French multinationals. Once similar obligations are adopted in most countries, at least in the EU, we might see generalization and good practices emerge. Over the long term, we might witness competition arise between firms on the ground of their due diligence strategies.
From outside the GVC, compliance can also be monitored by actors such as trade unions and NGOs. They have long been active in litigation and were consulted in the process of designing legislation. The European Parliament’s Recommendation suggests their involvement in the establishment of the undertaking’s due diligence strategies, similar to French law.Footnote 54 Further, due diligence strategies are to be made public. In France, few companies have made public which NGOs or stakeholders contributed to the design of the strategy. Even if there is no constructive cooperation between multinationals and NGOs yet, NGOs have access to grievance mechanisms under the European Parliament’s Recommendation, which resemble the letter of formal notice under the French law.Footnote 55 Stakeholders, who are not limited to NGOs, could thereby voice concerns as to the existence of risks, which the undertakings would have to answer and be transparent about through publication.
NGOs have a unique capacity for gathering information abroad on the ground. The European Parliament’s text explicitly refers to the National Contact Point under the OECD framework. National Contact Points are not only entrusted with the promotion of the OECD guidelines; they offer a non-judicial platform for grievance mechanisms.Footnote 56 The OECD conducts an in-depth analysis of the facts and publishes a statement as to the conflict and what it can offer to mediate it. Although such proceedings are non-binding, they do offer the possibility of an exchange between the parties, and the case files are often relied on before courts. It seems that NGOs and other stakeholders have a role to play in compliance with the due diligence principles. They are given the possibility to penetrate the network and work with it from the inside. There are equally mechanisms that allow for external review of the GVC’s behaviour.
13.4 The Way Ahead: The Snake Bites Its Own Tail
The European Parliament has discussed the introduction of an independent authority with investigative powers to oversee the application of the proposed directive – namely, the establishment of due diligence plans and appropriate responses in case of risks.Footnote 57 In EU jargon, this implies the creation of a regulatory agency or a similar body. Such an agency could take different forms and could have different powers; what is crucial is the role such an agency might play in the monitoring and surveillance of fundamental rights, the environment, labour rights, consumer rights and so on. A general cross-cutting approach would have a broader effect than isolated pieces of sector- or product-specific legislation. If such rights were taken as seriously as, for instance, competition law, the EU would turn into a leader in transmitting its values through GVCs at the international level. Playing and being the gentle civilizer does not mean that the EU does not behave like a hegemon, though.Footnote 58
Does the snake bite its own tail? Despite the idealistic compliance mechanisms, a return to courts seems inevitable, and fundamental questions remain. Are multinationals responsible for their actions abroad? Let us flip a coin. Heads: yes, there is legislation, or it is underway. There is political will and civic engagement. There is a strong rights rhetoric that people, politicians and multinationals relate to. Heads of multinationals and politicians have said this is important. Firms are adopting due diligence strategies; they are mitigating the risks of their activities. They are taking their responsibility seriously. Tails: all the above is true, there has been considerable progress and there is optimism – but does it work in practice? Some doubts arise. There are issues of compliance, and courts struggle. Multinationals, and nowadays the GAFAs, have communication strategies to send positive messages. They do not have mailboxes; it is sometimes difficult even to find them. Mostly, they might even own the GVCs, and what happens there stays there. It depends upon their desire to commit to their duty of due diligence, not upon the state. How will these parties react in the algorithmic society?
14.1 Introduction
Ongoing digital transformation combined with artificial intelligence (AI) brings serious advantages to society.Footnote 1 Transactional opportunities knock: optimal energy use, fully autonomous machines, electronic banking, medical analysis, constant access to digital platforms. Society at large is embracing the latest wave of AI applications as one of the most transformative forces of our time. Two developments contribute to the rise of the algorithmic society: (1) the possibilities resulting from technological advances in machine learning, and (2) the availability of data analysis using algorithms. Where the aim is to promote competitive data markets, the question arises of what benefits or harms may accrue to private individuals. Some are concerned about human dignity.Footnote 2 They believe that human dignity may be threatened by digital traders who demonstrate an insatiable hunger for data.Footnote 3 Through algorithms, traders may predict, anticipate and regulate the future behaviour of private individuals, specifically consumers. Data assembly forms part of reciprocal transactions in which these data are currency. With the deployment of AI, traders can exclude uncertainty from the automated transaction processes.
The equality gap in the application of technology to automated transactions raises the question of whether the private individual’s fundamental rights are adequately safeguarded.Footnote 4 Prima facie, the consumer stands weak when she is subjected to automatic processes – no matter whether it concerns day-to-day transactions, like boarding a train, or a complex decision tree used to validate a virtual mortgage. When ‘computer says no’, the consumer is left with limited options: click yes to transact (and even then she could fail), abort or restart the transaction process, or – much more difficult – obtain information or engage in renegotiations. But where the negotiation process is almost fully automated and there is no human counterpart, the third option is circular rather than complementary to the first two. Empirical evidence suggests that automated decisions will be acceptable to humans only if they are confident that the technology used and its output are fair, trustworthy and corrigible.Footnote 5 How should Constitutional States respond to new technologies on multisided platforms that potentially shift the bargaining power to the traders?
A proposed definition of digital platforms is that they are companies (1) operating in two- or multi-sided markets, where at least one side is open to the public; (2) whose services are accessed via the Internet (i.e., at a distance); and (3) that, as a consequence, enjoy particular types of powerful network effects.Footnote 6 With the use of AI, these platforms may create interdependence of demand between the different sides of the market. Interdependence may create indirect network externalities. This leads to the question of whether and, if so, how traders can deploy AI to attract one group of customers in order to attract the other, and to keep both groups thriving on the digital marketplace.
AI is a collection of technologies that combine data, algorithms and computing power. Yet science is unable to agree even on a single definition of the notion of ‘intelligence’ as such. AI is often not defined either; rather, its purpose is described. A starting point for understanding algorithms is to see them as virtual agents. Agents learn, adapt and even deploy themselves in dynamic and uncertain virtual environments. Such learning is apt to create a stable and reliable environment of automated transactions. AI seems to entail the replication of human behaviour, through data analysis that models ‘some aspect of the world’. But does it? AI employs data analysis models to map behavioural aspects of humans.Footnote 7 Inferences from these models are used to predict and anticipate possible future events.Footnote 8 The difference between applying AI and standard methods of data analysis is that AI does not analyse data only in the way it was initially programmed to. Rather, AI assembles data, learns from them to respond intelligently to new data and adapts its output accordingly. Thus AI is not ideal for the linear analysis of data in the manner in which they have been processed or programmed. Conversely, algorithms applying machine learning are more dynamic.Footnote 9
Machine learning algorithms build a mathematical model based on sample data, known as ‘training data’.Footnote 10 Training data serve computer systems to make predictions or decisions without being programmed specifically to perform the task. Machine learning focuses on prediction, based on known properties learned from the training data. Conversely, data analysis focuses on the discovery of (previously) unknown properties in the data. The analytics process enables the processor to mine data for new insights and to find correlations between apparently disparate data sets through self-learning. Self-learning AI can be supervised or unsupervised. Supervised learning is based on algorithms that build on and rely on labelled data sets. The algorithms are ‘trained’ to map from input to output by the provision of data with ‘correct’ values already assigned to them. The first, training, phase creates models on which predictions can then be made in the second, ‘prediction’, phase.Footnote 11 Unsupervised learning entails that the algorithms are ‘left to themselves’ to find regularities in input data without any instructions on what to look for.Footnote 12 It is the ability of the algorithms to change their output based on experience that gives machine learning its power.
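The distinction can be made concrete with a minimal sketch, assuming the scikit-learn library and its bundled iris toy data set purely for illustration; real platform systems obviously train on far larger, proprietary data.

```python
# A minimal sketch contrasting supervised and unsupervised learning.
# Assumes scikit-learn and its bundled iris toy data set; illustration only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: 'training data' carry correct labels already assigned.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # training phase
print("prediction phase accuracy:", model.score(X_test, y_test))  # prediction phase

# Unsupervised learning: the algorithm is 'left to itself' to find
# regularities in the input data; no labels are provided.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("regularities found without labels:", sorted(set(clusters)))
```

Note that the supervised model can at least be tested against the labels it was trained on; the unsupervised one offers no such external reference point, which foreshadows the transparency problem discussed below.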
For humans, it is practically impossible to deduce, and to contest in an adequate manner, the veracity of a machine learning process and the outcome based on it. This chapter contends that the deployment of AI on digital platforms could lead to potentially harmful situations for consumers, given the circularity of algorithms and data. Policy makers struggle to formulate answers. In Europe, the focus has been on establishing that AI systems should be transparent, traceable and guarantee human oversight.Footnote 13 These principles form the basis of this chapter. Traceability of AI could contribute to another requirement for AI in the algorithmic society: veracity, or truthfulness of data.Footnote 14 Veracity and truthfulness of data are subject to the self-learning AI output.Footnote 15 In accepting the veracity of the data, humans require trust. Transparency is key to establishing trust. However, many algorithms are non-transparent and thus incapable of explanation to humans. Even if transparent algorithms were capable of explanation to humans, the most effective machine learning processes would still defy human understanding. Hence the search for transparent algorithms is unlikely to provide insights into the underlying technology.Footnote 16 The quality of output using non-transparent AI is probably better, but it makes the position of the recipient worse, because there is no way for her to test the processes. Consequently, Constitutional States may want to contain the potential harms of these technologies by applying private law principles.
This chapter’s principal research question is how Constitutional States should deal with new forms of private power in the algorithmic society. In particular, the thesis is that regulatory private law can be revamped in the consumer rights realm to serve as a tool to regulate AI and its possible adverse consequences for the weaker party on digital platforms. Rather than top-down regulation of AI’s consequences to protect human dignity, this chapter proposes considering a bottom-up approach of empowering consumers in the negotiation and governance phases of mutual digital platform transactions. Following the main question, it must be seen how consumer rights can be applied to AI in a meaningful and effective manner. Could AI output be governed better if the trader had to comply with certain consumer law principles such as contestability, traceability, veracity and transparency?
One initial objection may query why we limit this chapter to consumer law. The answer is that consumers are affected directly when there is no room to negotiate or contest a transaction. Consumer rights are fundamental rights.Footnote 17 The Charter of Fundamental Rights of the EU (CFREU) dictates that the Union’s policies ‘shall ensure a high level of consumer protection’.Footnote 18 The high level of consumer protection is sustained by safeguarding, inter alia, the consumers’ economic interests in the Treaty on the Functioning of the European Union (TFEU).Footnote 19 The TFEU stipulates that the Union must promote consumers’ rights to information and contribute to the attainment of a high-level baseline of consumer protection that also takes technological advances into account.Footnote 20 It is evident that in the algorithmic society the EU will strive to control technologies if these potentially cause harm to the foundations of European private law. By responding adequately to the impact that AI deployment may have on private law norms and principles, a combined technology and private law approach to AI could, conversely, reinforce European private law.Footnote 21 Although AI is a global phenomenon, it is challenging to formulate a transnational law approach, given the lack of global AI and consumer regulation.
The structure is as follows: Section 14.2 sets the stage: AI on digital platforms is discussed bottom-up in the context of EU personal data and internal market regulation, in particular revamped consumer law, online intermediaryFootnote 22 and free-flow of data regulation. The focus is on contributing to the ongoing governance debate of how to secure a high level of consumer protection when AI impacts consumer transactions on digital platforms, along with what rights consumers should have if they want to contest or reject AI output. Section 14.2.1 explores why consumer law must supplement AI regulation to warrant effective redress. Section 14.2.2 alludes to principles of contract law. Section 14.2.3 juxtaposes consumer rights with the data strategy objectives. Section 14.2.4 discusses trustworthiness and transparency. Section 14.3 is designed to align consumer rights with AI. Section 14.3.1 reflects on the regulation of AI and consumer rights through GTC. Section 14.3.2 presents consumer law principles that could be regulated: contestability (Section 14.3.2.1), traceability and veracity (Section 14.3.2.2) and transparency (Section 14.3.2.3). Section 14.3.3 considers further harmonization of consumer law in the context of AI. Section 14.4 contains closing remarks and some recommendations.
14.2 AI on Digital Platforms
14.2.1 Consumers, Data Subjects and Redress
Consumers may think they are protected against the adverse consequences of AI under privacy regulations and personal data protection regimes. However, it remains to be seen whether personal data protection extends to AI. Privacy policies are not designed to protect consumers against adverse consequences of data generated through AI. In that sense, there is a significant conceptual difference between policies and general terms and conditions (GTC): privacy policies are unilateral statements for compliance purposes. The policies leave no room for negotiation. Moreover, privacy policies contain fairly moot purpose limitations, which are formulated de facto as processing rights. Private consumers/data subjects consider their consent to data processing implied, whatever technology is employed. Hence, traders might be apt to apply their policies to consumers who are subjected to AI and machine learning. The General Data Protection Regulation (GDPR) contains one qualification in the realm of AI:Footnote 23 a data subject has the right to object at any time to automated decision-making (ADM), including profiling. This obligation for data controllers is offset by the provision that controllers may employ ADM, provided they demonstrate compelling legitimate grounds for the processing which override the interests, rights and freedoms of the data subject.
Most of the traders’ machine learning is fed by aggregated, large batches of pseudonymised or anonymised non-personal data.Footnote 24 There is no built-in yes/no button to express consent to being subjected to AI, and there is no such regulation on the horizon.Footnote 25 Data policies are less well tailored than GTC to defining consumer rights for complex AI systems. Besides, it is likely that most private individuals do not read the digital privacy policies – nor the GTC, for that matter – prior to responding to AI output.Footnote 26 Questions such as ‘What are my rights?’ reveal important private law concerns: they relate to access rights and vested consumer rights; the right to take note of and save or print the conditions; the voidness of unfair user terms; and termination rights. Traders usually refer to the GTC that can be found on the site. There is no meaningful choice. That is even more the case in the continental tradition, where acceptance of GTC is explicit. In Anglo-American jurisdictions, the private individual is confronted with a pop-up window which must be scrolled through and accepted. Declining means aborting the transaction.
‘How can I enforce my rights against the trader?’ requires that the consumer who wishes to enforce her rights be able to address the trader, either on the platform or through online dispute resolution mechanisms. Avoidance or nullification are remedies when an agreement came about through coercion, error or deceit – vitiating factors settled in European private law principles. Hence the consumer needs to know there is a remedy if the AI process contained errors or was faulty.Footnote 27
14.2.2 Principles of Contract Law
In the algorithmic society, consumers should still have at least some recourse to a counterparty whom they can ask for information during the consideration process. They must have redress when they do not understand or agree with transactional output that affects their contractual position without explanation. The right to correct steps in contract formation is moot where the process is cast in stone. Once consumers have succeeded in identifying the formal counterparty, they can apply remedies. Where does that leave them if the response to these remedies is also automated as a result of the trader’s use of profiling and decision-making tools? This reiterates the question of whether human dignity is at stake when the counterpart is not a human but a machine. The consumer becomes a string of codes and loses her feeling of uniqueness.Footnote 28 Furthermore, when distributed ledger technology is used, the chain of contracts is extended. There is the possibility that an earlier contractual link will be ‘lost’. For example, there may be a gap in the formation on the digital platform because the contract formation requirements either were not fully met or were waived. Another example is where the consumer wants to partially rescind the transaction but the system does not cater for a partial breach. The impact of a broken upstream contractual link on a downstream contract in an AI-enabled transactional system is likely to raise novel contract law questions too. An agreement may lack contractual force if there is uncertainty or if a downstream contractual link in the chain is dependent on the performance of anterior upstream agreements. An almost limitless range of possibilities will need to be addressed in software terms in order to execute the platform transaction validly. When the formation steps use automated decision-making processes that are not covered by the GTC governing the status of AI output, this raises the question of how AI using distributed ledger technology could react to non-standard events or conditions, and whether and how the chain of transactions is part of the consideration. The consumer could wind up in a vicious circle, and her fundamental right to a high level of consumer protection could be at stake, more than was the case in the information society. Whereas the e-Commerce, Distance Selling and, later, Services Directives imposed information duties on traders, the normative framework for the algorithmic society rests on rather different principles. Doctrines such as freedom of contract – which entails the exclusion of coercion – and error may be unenforceable in practice when AI output contains flaws or defects. For the consumer to invoke lack-of-will theories, she needs to be able to establish where and how in the system the flaws or mistakes occurred.
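To illustrate the upstream/downstream dependency just described, the following minimal sketch models a chain of contracts as a hypothetical data structure; it is not based on any specific distributed ledger implementation, and all names are invented for illustration.

```python
# Minimal sketch of the chain-dependency problem: a downstream contract
# may lack force when an anterior upstream link is 'lost'. Hypothetical
# model for illustration only; not any specific DLT implementation.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Contract:
    name: str
    formation_complete: bool                # were all formation requirements met?
    upstream: list[Contract] = field(default_factory=list)

    def enforceable(self) -> bool:
        # A gap in formation anywhere up the chain propagates downstream.
        return self.formation_complete and all(
            link.enforceable() for link in self.upstream
        )


supply = Contract("upstream supply", formation_complete=False)   # the 'lost' link
platform_sale = Contract("platform sale", True, upstream=[supply])
consumer_deal = Contract("consumer transaction", True, upstream=[platform_sale])

# The consumer-facing contract looks valid in isolation, yet the broken
# upstream link deprives it of contractual force in this model.
print(consumer_deal.enforceable())  # False
```

The point of the sketch is that the consumer at the end of the chain has no visibility of the upstream flaw: precisely the traceability problem this chapter addresses.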
14.2.3 Data Strategy
Does the data strategy stand in the way of consumer protection against AI? The focus of the EU’s data strategy is on stimulating the potential of data for business, research and innovation purposes.Footnote 29 The old regulatory dilemma of how to balance a fair and competitive business environment with a high level of consumer rights is revived. In 2019–2020, the Commission announced various initiatives, including (1) rules on securing the free flow of data within the Union,Footnote 30 (2) provisions on data access and transfer,Footnote 31 and (3) enhanced data portability.Footnote 32 Prima facie, these topics exhibit different approaches to achieving a balance between business and consumer interests. More importantly, how does the political desire for trustworthy technology match such diverse regulations? The answer is that it does not. The Free Flow of Non-Personal Data Regulation lays down rules on data localization requirements, the availability of data to competent authorities and data porting for professional users.Footnote 33 It does not cover AI use. The Modernization of Consumer Protection Directive alludes to the requirement for traders to inform consumers only about the default main parameters determining the ranking of offers presented to the consumer as a result of the search query, and their relative importance as opposed to other parameters.Footnote 34 The proviso contains a reference to traders using ‘processes, specific signals incorporated into algorithms or other adjustment or demotion mechanisms used in connection with the ranking’ not being ‘required to disclose the detailed functioning of their ranking mechanisms, including algorithms’.Footnote 35 It does not appear that the Modernization of Consumer Protection Directive is going to protect consumers against adverse consequences of AI output. It also seems that the Trade Secrets Directive stands somewhat in the way of algorithmic transparency.
The provisions on data porting revert to information duties. Codes of Conduct must detail the information on data porting conditions (including technical and operational requirements) that traders should make available to private individuals in a sufficiently detailed, clear and transparent manner before a contract is concluded.Footnote 36 In light of the limited scope of data portability regulation, there can be some doubt as to whether the high-level European data strategy is going to contribute to a human-centric development of AI.
14.2.4 Trustworthiness and Transparency
The next question is what regulatory requirements could emerge when AI becomes ubiquitous in mutual transactions.Footnote 37 The 2019 Ethical Guidelines on AI allude to seven key requirements for ‘Trustworthy AI’: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) environmental and societal well-being; and (7) accountability.Footnote 38 These non-binding guidelines address different topics, some of which fall outside the scope of private law principles. In this chapter, the focus is on transparency, accountability and other norms, notably traceability, contestability and veracity.Footnote 39 These notions are covered in the following discussion. First, it is established that opaqueness about technology use and a lack of accountability could be perceived as potentially harmful to consumers.Footnote 40 There are voices claiming that technology trustworthiness is essential for citizens and businesses that interact.Footnote 41 Is it up to Constitutional States to warrant and monitor technology trustworthiness, or should this be left to businesses? Does warranting technology trustworthiness not revive complex economic questions, such as how to deal with a possible adverse impact on competition or the stifling of innovation when governments impose standardized technology norms to achieve a common level of technology trustworthiness – in the EU only? What if trust in AI is broken?
A possible denominator for trustworthiness may be transparency. Transparency is a key principle in different areas of EU law. A brief exploration of existing regulation reveals different tools to regulate transparency. Recent examples in 2019–2020 range from the Modernization of Consumer Protection Directive to the Online Intermediary Services Regulation, the Ethical Guidelines on AI, the Open Data Directive and the 2020 White Paper on Artificial Intelligence.Footnote 42 All these instruments at least allude to the need for transparency in the algorithmic society. The Modernization of Consumer Protection Directive provides that more transparency requirements should be introduced. Would it be necessary to redefine transparency as a principle of private law in the algorithmic society? One could take this a step further: to achieve technology trustworthiness, should there be more focus on regulating the transparency of AI and machine learning?Footnote 43 The Ethics Guidelines 2019 point at permission systems, fairness and explicability. From a private law perspective, permission systems in particular could be considered as a means to establish and safeguard trust. But reference is also made to the factual problem that consumers often do not take note of the provisions that drive the permission.
Explicability is not enshrined as a guiding principle. Nevertheless, transparency notions could be a stepping stone to obtaining explicability.Footnote 44 Accuracy may be a given. What matters is whether the consumer has the right and is enabled to contest an outcome that is presented as accurate.
14.3 Consumer Rights, AI and ADM
14.3.1 Regulating AI through General Terms and Conditions
There are two aspects regarding GTC that must be considered. First, contrary to permission systems, the general rule in private law remains that explicit acceptance of GTC by the consumer is not required, as long as the trader has made the terms available prior to or at the moment the contract is concluded. Contrary to jurisdictions that require parties to scroll through the terms, the European approach of accepting implied acceptance in practice leads to consumers’ passiveness. Indeed, the system of implicit permission encourages consumers not to read GTC. Traders on digital platforms need to provide information on what technologies they use and how they are applied. Given the sheer importance of the fundamental rights of human dignity and consumer protection when AI is applied, the question is whether consumers should be asked for explicit consent when the trader applies AI. It would be very simple for traders to implement consent buttons applying varied decision trees. But what is the use when humans must click through to complete a transaction? Take, for example, the system for obtaining cookie consent on digital platforms.Footnote 45 On the one hand, the traders (must) provide transparency on which technologies they employ. On the other hand, cookie walls prevent the consumer from making an informed decision, as she is coerced to accept the cookies. A problem recognizable from cookies, and equally relevant to AI, is that consumers often are unable to understand what the different technologies could mean for them personally. In the event the AI output matches their expectations or requirements, consumers are unlikely to protest against consent previously given. Hence the real question is whether consumers should be offered a menu of choices beforehand, plus an option to accept or reject AI output or ADM. This example will be covered in the following discussion.
Second, where there is no negotiation or modification of the GTC, the consumer still will be protected by her right to void or rescind black-, blue- or grey-list contract provisions. Additionally, the EU Unfair Contract Terms Directive contains a blue list with voidable terms and conditions.Footnote 46 However, the black, grey and blue lists do not count for much. Rather, the GTC should contain clauses that oblige the trader to observe norms and principles such as traceability, contestability, transparency and veracity of the AI process. This raises the question of whether ethics guidelines and new principles could be translated into binding, positively formulated obligations on AI use. Rather than unilateral statements on data use, GTC could be required to comply with general principles and obligations.
The key for prospective regulation does not lie in art. 6 (1) Modernization of Consumer Protection Directive. Although this clause contains no fewer than twenty-one provisions on information requirements, including two new requirements on technical aspects, none of the requirements applies to providing the consumer with information on the use of AI and ADM, let alone the contestability of the consumer transaction based thereon. Granted, there is an obligation for the trader to provide information on the scope of the services, but not on the specific use of AI technology. It is a very big step from the general information requirements to providing specific information on the application of AI and ADM in mutual transactions. When a consumer is subjected to AI processes, she should be advised in advance, not informed after the fact. A commentary to art. 6 clarifies that traders must provide the information mentioned therein prior to the consumer accepting the contract terms (GTC).Footnote 47 The underlying thought is not new – to protect consumers, as weaker contractual parties, from concluding contracts that may be detrimental to them as a result of not having all the necessary information. Absent any relevant information, the consumer lags behind, especially in terms of not being informed adequately (1) that, (2) how and (3) for which purposes AI and machine learning are applied by the trader. The commentators generally feel that providing consumers with the relevant information prior to the conclusion of the contract is essential. Knowing that the trader uses such technologies could be of utmost importance to the consumer. Even if she cannot oversee what the technological possibilities are, she should still get advance notice of the application of AI. Advance notice means a stand-still period during which she can make an informed decision. Going back to the cookie policy example, it is not onerous on the trader to offer the consumer a menu of choices beforehand. This would be especially relevant for the most used application of AI and ADM: profiling. The consumer should have the right to reject a profile scan that contains parameters she does not find relevant or which she perceives as onerous on her. Granted, the trader will warn the consumer that she will not benefit from the best outcome, but that should be her decision. The consumer should have a say in this important and unpredictable process. She should be entitled to anticipate adverse consequences of AI for her.
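To make the idea of a menu of choices concrete, a minimal sketch of what such an advance-notice consent step could look like is given below. It is an illustration only, not a description of any existing platform; the class, the profiling parameters and the purpose shown are invented assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical profiling parameters a trader might disclose in advance.
# The names are illustrative; a real platform would list its actual parameters.
DEFAULT_PARAMETERS = ["purchase_history", "location", "browsing_behaviour"]

@dataclass
class AIConsentMenu:
    """Advance-notice menu: before contracting, the consumer sees that
    AI/ADM is used, for what purpose, and on which parameters."""
    purpose: str
    parameters: list = field(default_factory=lambda: list(DEFAULT_PARAMETERS))
    accepted: dict = field(default_factory=dict)

    def present(self) -> None:
        print(f"This trader uses automated decision-making for: {self.purpose}")
        for p in self.parameters:
            print(f" - parameter used in profiling: {p}")

    def choose(self, parameter: str, accept: bool) -> None:
        # The consumer may reject individual parameters she finds onerous;
        # the trader may warn that the outcome could be less favourable.
        self.accepted[parameter] = accept

    def consented_parameters(self) -> list:
        return [p for p, ok in self.accepted.items() if ok]

menu = AIConsentMenu(purpose="personalised pricing")
menu.present()
menu.choose("location", accept=False)        # consumer opts out of one parameter
menu.choose("purchase_history", accept=True)
print("Profiling may use:", menu.consented_parameters())
```

The point of the sketch is that such a step is technically trivial; the open question is a legal one, namely whether the trader should be obliged to offer it.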
The consumer must be able to trace and contest the AI output and ADM. The justification for such rights lies in the risk of discrimination and in the lack of information on the essentials underlying the contract terms that come about through the private law principle of offer and acceptance. Granted, art. 9 Modernization of Consumer Protection Directive contains the generic right of withdrawal.Footnote 48 Strictly speaking, contesting a consumer transaction based on AI is then not necessary: the consumer can simply fill in a form to rescind the agreement. Regardless, the point of a consumer approach to AI use is not for the consumer to walk away. The consumer must have the right to know what procedures were used, what kind of outcome they produced, what it means for the transaction and what she can do against it. As said, the consumer also must have a form of redress, not just against the trader but also against the developer of the AI software, the creator of the process, the third party instructing the algorithms and/or the intermediary or supplier of the trader.
14.3.2 Consumer Law Principles
Which consumer law principles could be reignited in GTC to enable consumers to hold traders accountable for unfair processes or non-transparent output? This goes back to the main theorem. Transactions on digital platforms are governed by mutually agreed contract terms. It is still common practice that these are contained in GTC. Is there a regulatory gap that requires Constitutional States to formulate new conditions, or bend existing ones, for traders using AI? The Bureau Européen des Unions de ConsommateursFootnote 49 proposes ‘a set of transparency obligations to make sure consumers are informed when using AI-based products and services, particularly about the functioning of the algorithms involved and rights to object automated decisions’. The Modernization of Consumer Protection Directive is open to adjustment of consumer rights ‘in the context of continuous development of digital tools’. The Directive makes a clear-cut case for protecting consumers against the adverse consequences of AI.Footnote 50 But it contains little concrete wording on AI use and consumers.Footnote 51 Embedding legal obligations for the trader in GTC could, potentially, be a very effective measure. There is one caveat, in that GTC often contain negatively formulated obligations.Footnote 52 Positively phrased obligations, such as the obligation to inform consumers that the trader employs AI, require further conceptual thinking. Another positively phrased obligation could be for the trader to explain the AI process and to explain and justify the AI output.
14.3.2.1 Contestability
How unfair is it when consumers may be subject to decisions that are cast in stone (i.e., non-contestable)? One remedy would be to embed contestability steps in smart consumer contracts. At their core, smart contracts are self-executing arrangements that the computer can make, verify, execute and enforce automatically under event-driven conditions set in advance. From an AI perspective, an almost limitless range of possibilities must be addressed in software terms. It is unlikely that these possibilities can be revealed step-by-step to the consumer. Consumers probably are unaware of the means of redress against AI output used in consumer transactions.Footnote 53 Applying a notion of contestability – not against the transaction but against the applied profiling methods or AI output – is no fad. If the system enables the consumer to test the correctness of the AI technology process and output, there must be a possibility of reconsidering the scope of the transaction. Otherwise, the sole remedy for the consumer could be a re-test of the AI process, which is no real remedy. Indeed, the possibility of technological error or fraud underlines that a re-test is not enough. Traditional contract law remedies, such as termination for cause, could be explored. Furthermore, in connection with the information requirements, it would make sense to oblige traders to grant the consumer a single point of contact. This facilitates contesting the outcome with the trader or a third party, even if the automated processes are not monitored by the trader.Footnote 54
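By way of illustration, the sketch below shows how a contestability step could be embedded in an otherwise self-executing arrangement: the AI output is held in a proposed state until a contest window closes, instead of executing immediately. The state names, the event and the example output are invented for this purpose and do not reflect any deployed smart contract.

```python
class SmartConsumerContract:
    """Self-executing arrangement with one deliberate break in automation:
    AI output is held as 'proposed' until the contest window closes."""

    PROPOSED, CONTESTED, EXECUTED = "proposed", "contested", "executed"

    def __init__(self):
        self.state = None
        self.output = None

    def on_ai_output(self, output: dict) -> None:
        # Event-driven condition set in advance: instead of executing
        # immediately, the contract records the output and waits.
        self.output = output
        self.state = self.PROPOSED

    def contest(self, reason: str) -> None:
        # Embedded contestability step: the consumer can divert the
        # transaction to human or third-party review before execution.
        if self.state == self.PROPOSED:
            self.state = self.CONTESTED
            print(f"Output contested ({reason}); sent for review")

    def close_window(self) -> None:
        # Called when the contest window expires without objection.
        if self.state == self.PROPOSED:
            self.state = self.EXECUTED
            print(f"Executing automatically on output: {self.output}")

contract = SmartConsumerContract()
contract.on_ai_output({"personalised_price": 19.99})
contract.contest("price based on parameters I rejected")  # consumer objects
```

The design choice is the point: contestability is not a bolt-on remedy after execution but a state that the arrangement must pass through before it finalises.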
14.3.2.2 Traceability, Veracity
Testing veracity requires reproducibility of the non-transparent machine learning process. Does a consumer have a justified interest in tracing the process steps of machine learning, whether or not this has led to undesirable AI output? To a lawyer it seems reasonable that – no matter the output – as long as the AI output has an adverse impact on the consumer, the trader should bear the burden of proving that the output is correct, and that the consumer should be provided with the minimum of technical information used in the AI process necessary to make a meaningful correction request. Traceability is closely connected with the requirement of accessibility of information, enshrined in the various legal instruments for digital platform regulation. As such, traceability is closely tied to the transparency norm.
In practice, however, a trader using AI in a consumer transaction is likely to escape the burden of proof: it is the consumer who would have to show that the machine learning process, the AI output or the ADM is faulty. For the average consumer, it will be very difficult to provide evidence against the veracity of – both non-transparent and transparent – AI. The consumer is not the AI expert. The process of data analysis and machine learning does not rest in her hands. Besides, the trail of algorithmic decision steps probably is impossible to reconstruct. Hence, the consumer starts from a weaker position than the trader who applies AI. Granted, it was mentioned in Section 14.2.2 that it makes no practical sense for the consumer to ask for algorithmic transparency, should the consumer not agree with the output. The point is that at least the consumer should be given a chance to trace the process. Traceability – with the help of a third party who is able to audit the software trail – should be a requirement on the trader and a fundamental right for the consumer.
14.3.2.3 Transparency
Transparency is intended to solve information asymmetries with the consumer in the AI process. Transparency is tied closely to the information requirements laid down in the regulation of digital platforms and dating back to the Electronic Commerce Directive.Footnote 55 What is the consequence when information requirements are delisted because they have become technologically obsolete? Advocate General Pitruzzella proposed that the Court rule that an e-commerce platform such as Amazon can no longer be obliged to make a fax line available to consumers.Footnote 56 He also suggested that digital platforms must guarantee a choice of several different means of communication available to consumers, as well as rapid contact and efficient communication.Footnote 57 By analogy, in the algorithmic society, transparency obligations on AI-driven platforms could prove to be a palpable solution for consumers. Providing transparency on the output also contributes to the consumer exercising some control over data use in the AI process, notwithstanding the argument that transparent algorithms cannot be explained to a private individual.
14.3.3 Further Harmonization of Consumer Law in the Context of AI
It should be considered whether the Unfair Commercial Practices Directive could be updated with terms that regulate AI.Footnote 58 At a high level, this Directive introduced the notion of ‘good faith’ to prevent imbalances in the rights and obligations of consumers on the one hand and sellers and suppliers on the other.Footnote 59 It should be borne in mind that consumer protection will become an even more important factor when the chain of consumer agreements with a trader becomes extended. Granted, the question of whether and how AI is applied requires further thinking on what types of AI and data use could constitute unfair contract terms. A case could be made of an earlier agreement voiding follow-up transactions, for example, because the initial contract formation requirements were not met after AI deployment. But the impact of a voidable upstream contractual link on a downstream agreement in an AI-enabled contract system is likely to raise different novel contract law questions, for instance, regarding third-party liability.
In order to ensure that Member State authorities can impose effective, proportionate and dissuasive penalties in relation to widespread infringements of consumer law and to widespread infringements with an EU dimension that are subject to coordinated investigation and enforcement,Footnote 60 special fines could be introduced for the unfair application of AI.Footnote 61 Contractual remedies, including claims as a result of damages suffered from incorrect ADM, could be considered.
Prima facie, the Modernization of Consumer Protection Directive provides for the inclusion of transparency norms related to the parameters of ranking of prices and persons on digital platforms. However, the Directive does not contain an obligation to inform the consumer about the relative importance of ranking parameters and the reasons why and through what human process, if any, the input criteria were determined. This approach bodes well for the data strategy, but consumers could end up unhappy, for instance, if information about the underlying algorithms is not included in the transparency standard.
By way of an example, the Modernization of Consumer Protection Directive provides for a modest price transparency obligation at the retail level. It proposes a specific information requirement to inform consumers clearly when the price of a product or service presented to them is personalized on the basis of ADM. The purpose of this clause is to ensure that consumers can take into account the potential price risks in their purchasing decision.Footnote 62 But the proviso does not go as far as to determine how the consumer should identify these risks. Digital platforms are notoriously silent on price comparisons. Lacking guidance on risk identification results in a limited practical application of pricing transparency. What does not really help is that the Modernization of Consumer Protection Directive provides traders with a legal – if flimsy – basis for profiling and ADM.Footnote 63 This legal basis is, unfortunately, not supplemented by consumer rights that go beyond them receiving certain, non-specific information from the trader. The Modernization of Consumer Protection Directive, as it stands now, does not pass the test of a satisfactorily high threshold for consumer protection on AI-driven platforms.
14.4 Closing Remarks
This chapter makes a case for a bottom-up approach to AI use in consumer transactions. The theorem was that the use of AI could well clash with the fundamental right of a high level of consumer protection. Looking at principles of contract law, there could be a regulatory gap when traders fail to be transparent on why and how they employ AI. Consumers also require a better understanding of AI processes and consequences of output, and should be allowed to contest the AI output.
Regulators, likewise, could look at enhancing GTC provisions, to the extent that the individual does not bear the onus of evidence when contesting AI output. Consumers should have the right to ask for correction, modification and deletion of output directly from the traders. It should be borne in mind that the individual is contesting the way the output was produced, generated and used. The argument was also made that consumer rights could supplement the very limited personal data rights on AI.
When Constitutional States determine what requirements could be included in GTC by the trader, they could consider a list of transparency principles. The list could include (1) informing the consumer prior to any contract being entered into that the trader is using AI; (2) clarifying for what purposes AI is used; (3) providing the consumer with information on the technology used; (4) granting the consumer a meaningful, tailored and easy-to-use set of options for accepting or rejecting the use of AI and/or ADM, before the trader engages in such practice; (5) informing the consumer beforehand of possible adverse consequences for her if she refuses to submit to the AI; (6) explaining how to require from the trader a rerun of contested AI output; (7) adhering to an industry-approved code of conduct on AI and making this code easily accessible to the consumer; (8) informing the consumer that online dispute resolution extends to contesting AI output and/or ADM; (9) informing the consumer that her rights under the GTC are without prejudice to other rights, such as those under personal data regulation; (10) enabling the consumer – with one or more buttons – to say yes or no to any AI output, and giving her alternative choices; (11) enabling the consumer to contest the AI output or ADM outcome; (12) accepting liability for incorrect, discriminatory and wrongful output; (13) warranting the traceability of the technological processes used and allowing for an audit at reasonable cost; and (14) explaining the obligations related to how consumer contracts are shared with a third party performing the AI process. These suggestions presuppose that the consumer is entitled to have an independent human third party monitor the AI output, and that the onus of evidence regarding the veracity of the output rests on the trader.
The fact that AI is aimed at casting algorithmic processes in stone to facilitate mutual transactions on digital platforms should not give traders a carte blanche, when society perceives a regulatory gap.
15.1 Introduction
Our lives are increasingly inhabited by technological tools that help us with delivering our workload, connecting with our families and relatives, as well as enjoying leisure activities. Credit cards, smartphones, trains, and so on are all tools that we use every day without noticing that each of them may work only through its internal ‘code’. Those objects embed software programmes, and each software programme is based on a set of algorithms. Thus we may affirm that most of (if not all) our experiences are filtered by algorithms each time we use such ‘coded objects’.Footnote 1
15.1.1 A Preliminary Distinction: Algorithms and Soft Computing
According to computer science, algorithms are automated decision-making processes to be followed in calculations or other problem-solving operations, especially by a computer.Footnote 2 Thus an algorithm is a detailed and numerically finite series of instructions which can be processed through a combination of software and hardware tools: algorithms start from an initial input and reach a prescribed output through a subsequent set of commands that can involve several activities, such as calculation, data processing, and automated reasoning. The achievement of the solution depends upon the correct execution of the instructions.Footnote 3 However, it is important to note that, contrary to the common perception, algorithms are neither always efficient nor always effective.
From the efficiency perspective, algorithms must be able to execute their instructions without consuming an excessive amount of time and space. Although technological progress has allowed for the development of increasingly powerful computers, provided with more processors and larger memory, when an algorithm executes instructions that produce numbers exceeding the space available in the memory of a computer, the ability of the algorithm to solve the problem is called into question.
As a consequence, from the effectiveness perspective, algorithms may not always reach the exact solution or the best possible solution, as they may include a level of approximation which may range from a second-best solutionFootnote 4 to a very low level of accuracy. In this case, computer scientists use the term ‘soft computing’ (i.e., the use of algorithms that are tolerant of imprecision, uncertainty, partial truth, and approximation), due to the fact that the problems they address may not be solvable exactly, or may be solvable only through an excessively time-consuming process.Footnote 5
Accordingly, the use of these types of algorithms makes it possible to provide solutions to hard problems, though these solutions, depending on the type of problem, may not always be the optimal ones. Given the ubiquitous use of algorithms processing our data and consequently affecting our personal decisions, it is important to understand on which occasions we may (or should) not fully trust the algorithm and add a human in the loop.Footnote 6
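The trade-off between exactness and running time can be made concrete with a classic textbook example. The sketch below contrasts an exhaustive solver for a tiny knapsack problem (optimal, but exponential in time) with a greedy heuristic that answers quickly yet settles for a second-best solution; the item values are invented for illustration.

```python
from itertools import combinations

# Items as (value, weight) pairs; the capacity constraint makes this a
# small instance of the (NP-hard) knapsack problem.
items = [(60, 10), (100, 20), (120, 30)]
capacity = 50

def exact(items, capacity):
    """Brute force: always optimal, but time grows exponentially with input size."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w in combo) <= capacity:
                best = max(best, sum(v for v, _ in combo))
    return best

def greedy(items, capacity):
    """'Soft' heuristic: fast, but may settle for a second-best solution."""
    total_value, total_weight = 0, 0
    for value, weight in sorted(items, key=lambda i: i[0] / i[1], reverse=True):
        if total_weight + weight <= capacity:
            total_value += value
            total_weight += weight
    return total_value

print(exact(items, capacity))   # 220: the optimal choice (second and third item)
print(greedy(items, capacity))  # 160: the heuristic picks the first two items
```

Even on three items the heuristic already misses the optimum; at a realistic scale the exact method becomes computationally infeasible, which is precisely why approximate algorithms are used and why their output deserves scrutiny.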
15.1.2 The Power of Algorithms
According to Neyland,Footnote 7 we may distinguish between two types of power: one exercised by algorithms and one exercised across algorithms. The first one is the traditional one, based on the ability of algorithms to influence and steer particular effects. The second one is based on the fact that ‘algorithms are caught up within a set of relations through which power is exercised’.Footnote 8 In this sense, it is possible to affirm that the groups of individuals that at different stages play a role in the definition of the algorithm share a portion of power.
In practice, one may distinguish between two levels of analysis. At the first level, for instance, when we type a query into a search engine, the search algorithm activates and identifies the best results related to the keywords inserted, providing a ranked list of results. These results are based on a set of variables that depend on the context of the keywords, but also on the trustworthiness of the source,Footnote 9 on the previous search history of the individual, and so forth. The list of results available will then steer the decisions of the individual and affect his/her interpretation of the information searched for. Such power should not be underestimated, because the algorithm has the power to restrict the options available (i.e., omitting some content because it is evaluated as untruthful or irrelevant) or to make it more likely that a specific option is selected. If this can be qualified as the added value of algorithms, able to improve on the flaws of human reasoning – which include myopia, framing, loss aversion, and overconfidenceFootnote 10 – it also shows the power of the algorithm over individual decision-making.Footnote 11
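How much the chosen variables and their weights determine what the user sees can be shown with a toy ranking function. The variables, weights and scores below are invented for illustration and do not reflect any actual search engine; the point is only that whoever sets the weights sets the ranking.

```python
# Toy illustration: the ranked list depends entirely on the weights someone
# chose for each variable; changing the weights changes what the user sees.
documents = [
    {"title": "A", "keyword_match": 0.9, "source_trust": 0.3, "history_fit": 0.2},
    {"title": "B", "keyword_match": 0.6, "source_trust": 0.9, "history_fit": 0.8},
    {"title": "C", "keyword_match": 0.7, "source_trust": 0.5, "history_fit": 0.9},
]

weights = {"keyword_match": 0.5, "source_trust": 0.3, "history_fit": 0.2}

def score(doc: dict) -> float:
    # Weighted sum over the variables; the weights encode editorial choices.
    return sum(weights[k] * doc[k] for k in weights)

for doc in sorted(documents, key=score, reverse=True):
    print(doc["title"], round(score(doc), 2))  # ranks B, C, A under these weights
```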
At the second level of analysis, one may widen the view, taking into account the criteria that are used to identify the search results, the online information that is indexed, the computer scientists that set those variables, the company that distributes the algorithm, the public or private company that uses the algorithm, and the individuals that may steer the selection of content. All these elements have intertwining relationships that show a more distributed allocation of power – and, as a consequence, a quest for shared accountability and liability systems.
15.1.3 The Use of Algorithms in Content Moderation
In this chapter, the analysis will focus on those algorithms that are used for content detection and control on user-generated platforms, so-called content moderation. Big Internet companies have always used filtering algorithms to detect and classify the enormous quantity of data uploaded daily. Automated content filtering is not a new concept on the Internet. Since the first years of Internet development, many tools have been deployed to analyse and filter content, among which the most common and best known are those adopted for spam detection or hash matching. For instance, spam detection tools identify content received in one’s email address, distinguishing between clean emails and unwanted content on the basis of sharply defined criteria derived from previously observed keywords, patterns, or metadata.Footnote 12
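A minimal sketch of such a rule-based filter is given below; the keywords, the URL pattern and the threshold are invented for illustration and are far simpler than production systems, but they show the ‘sharply defined criteria’ at work.

```python
import re

# Minimal rule-based spam detector of the kind described above: fixed
# criteria derived from previously observed keywords and patterns.
SPAM_KEYWORDS = {"lottery", "winner", "free money", "click here"}
SUSPICIOUS_PATTERN = re.compile(r"https?://\S+\.(ru|xyz)\b")  # illustrative only

def is_spam(message: str, threshold: int = 2) -> bool:
    text = message.lower()
    hits = sum(1 for kw in SPAM_KEYWORDS if kw in text)  # keyword criteria
    if SUSPICIOUS_PATTERN.search(text):                  # pattern criterion
        hits += 1
    return hits >= threshold

print(is_spam("You are a winner! Click here for free money"))  # True
print(is_spam("Agenda for tomorrow's meeting attached"))        # False
```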
Nowadays, algorithms used for content moderation are in widespread use, having the advantage of scalability. Such systems promise to make the process much easier, quicker, and cheaper than would be the case when using human labour.Footnote 13
For instance, LinkedIn published an update on the algorithms used to select the best matches between employers and potential employees.Footnote 14 The first steps of the content moderation are worth describing. As a first step, the algorithms check and verify the compliance of the published content with the platform rules (leading to a potential downgrade of visibility or a complete ban in case of non-compliance). Then, the algorithms evaluate the interactions that were triggered by the content posted (such as sharing, commenting, or reporting by other users). Finally, the algorithms weigh such interactions, deciding whether the post will be demoted for low quality (a low interaction level) or disseminated further for its high quality.Footnote 15
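Schematically – and with invented rule sets, weights and thresholds – the three steps just described could look as follows; the sketch is a simplification and does not reproduce LinkedIn’s actual algorithms.

```python
# Step 1: rule check; Step 2: interaction collection; Step 3: quality weighing.
def check_platform_rules(post: dict) -> bool:
    banned = {"spam", "scam"}  # illustrative rule set
    return not banned & set(post["text"].lower().split())

def interaction_score(post: dict) -> float:
    # Positive signals raise the score; reports by other users lower it.
    return post["shares"] + post["comments"] - 5 * post["reports"]

def moderate(post: dict) -> str:
    if not check_platform_rules(post):
        return "banned or downgraded"
    score = interaction_score(post)
    return "disseminated further" if score > 10 else "demoted (low quality)"

post = {"text": "New opening in our legal team",
        "shares": 12, "comments": 4, "reports": 0}
print(moderate(post))  # disseminated further
```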
As the example of the LinkedIn algorithm clearly shows, the effectiveness of the algorithm depends on its ability to accurately analyse and classify content in its context and potential interactions. The capability to parse the meaning of a text is highly relevant for making important distinctions in ambiguous cases (e.g., when differentiating between contemptuous speech and irony).
For this task, the industry has increasingly turned to machine learning to train its programmes to become more context sensitive. Although there are high expectations regarding the ability of content moderation tools, one should not underestimate the risks of overbroad censorship,Footnote 16 violation of the freedom of speech principle, as well as biased decision-making against minorities and non-English speakers.Footnote 17 The risks are even more problematic in the case of hate speech, an area where the recent interventions of European institutions are pushing for more human and technological investment by IT companies, as detailed in the next section.
15.2 The Fight against Hate Speech Online
Hate speech is not a new phenomenon. Digital communication may be qualified only as a new arena for its dissemination. The features of social media pave the way to a wider reach of harmful content. ‘Sharing’ and ‘liking’ lead to a snowball effect, which allows the content to have a ‘quick and global spread at no extra cost for the source’.Footnote 18 Moreover, users see in the pseudonymity allowed by social media an opportunity to share harmful content without bearing any consequence.Footnote 19 In recent years, there has been a significant increase in the availability of hate speech in the form of xenophobic, nationalist, Islamophobic, racist, and anti-Semitic content in online communication.Footnote 20 Thus the dissemination of hate speech online is perceived as a social emergency that may lead to individual, political, and social consequences.Footnote 21
15.2.1 A Definition of Hate Speech
Hate speech is generally defined as speech ‘designed to promote hatred on the basis of race, religion, ethnicity, national origin’ or other specific group characteristics.Footnote 22 Although several international treaties and agreements do include hate speech regulation,Footnote 23 at the European level, such an agreed-upon framework is still lacking. The point of reference available until now is the Council Framework Decision 2008/913/JHA on Combatting Certain Forms and Expressions of Racism and Xenophobia by Means of Criminal Law.Footnote 24 As emerges from the title, the focus of the decision is the approximation of Member States’ laws regarding certain offences involving xenophobia and racism, whereas it does not include any references to other types of motivation, such as gender or sexual orientation.
The Framework Decision 2008/913/JHA should have been implemented by Member States by November 2010. However, the implementation was less effective than expected: not all the Member States have adapted their legal framework to the European provisions.Footnote 25 Moreover, in the countries where implementation occurred, the legislative intervention followed different national approaches to hate speech, either through the inclusion of the offence within the criminal code or through the adoption of special legislation on the issue. The choice is not without effects, as the procedural provisions applicable to special legislation may differ from those applicable to offences included in the criminal code.
Given the limited effect of the hard law approach, the EU institutions moved to a soft law approach regarding hate speech (and, more generally, also illegal content).Footnote 26 Namely, EU institutions moved toward the use of forms of co-regulation where the Commission negotiates a set of rules with the private companies, under the assumption that the latter will have more incentives to comply with agreed-upon rules.Footnote 27
As a matter of fact, on 31 May 2016, the Commission adopted a Code of Conduct on countering illegal hate speech online, signed by the biggest players in the online market: Facebook, Google, Microsoft, and Twitter.Footnote 28 The Code of Conduct requires that the IT company signatories to the code adapt their internal procedures to guarantee that ‘they review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary’.Footnote 29 Moreover, according to the Code of Conduct, the IT companies should provide for a removal notification system which allows them to review the removal requests ‘against their rules and community guidelines and, where necessary, national laws transposing the Framework Decision 2008/913/JHA’.
As is evident, the approach taken by the European Commission is more focused on the timely removal of allegedly hateful speech than on the procedural guarantees that such a private enforcement mechanism should adopt in order not to unreasonably limit the freedom of speech of users. The most recent evaluation of the effects of the Code of Conduct on hate speech shows an increased number of notifications that have been evaluated and eventually led to the removal of hate speech content within an ever-shorter time frame.Footnote 30
In order to achieve such results, the signatory companies adopted a set of technological tools assessing and evaluating the content uploaded on their platforms. In particular, they fine-tuned their algorithms in order to detect potentially harmful content.Footnote 31 According to the figures provided by the IT companies regarding flagged content, human labour alone could not achieve such a task.Footnote 32 However, such algorithms may only flag content based on certain keywords, which are continuously updated but always lag behind the evolution of the language. And, most importantly, they may still misinterpret context-dependent wording.Footnote 33 Hate speech is a type of language that is highly context sensitive, as the same word may radically change its meaning when used in different places over time. Moreover, algorithms may be improved and trained in one language, but not in other languages which are less prominent in online communication. As a result, an algorithm that works only through the classification of certain keywords cannot attain the level of complexity of human language and runs the risk of producing unexpected false positives and negatives in the absence of context.Footnote 34
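The context problem can be illustrated with a deliberately crude sketch: a purely keyword-based flagger cannot distinguish an insult from a news report quoting that insult, producing exactly the false positives described above. The keyword and the example sentences are invented.

```python
# A keyword-only classifier cannot see context: the same word appears in
# a hateful insult, in a news report quoting it, and in harmless use.
HATE_KEYWORDS = {"vermin"}

def keyword_flag(text: str) -> bool:
    return any(kw in text.lower() for kw in HATE_KEYWORDS)

examples = [
    ("Those people are vermin and should leave", True),               # hateful
    ("The mayor condemned a post calling migrants 'vermin'", False),  # reporting
    ("Proud member of the garden-vermin appreciation club", False),   # harmless
]

for text, actually_hateful in examples:
    flagged = keyword_flag(text)
    if flagged and not actually_hateful:
        print(f"FALSE POSITIVE: {text!r}")   # lawful speech would be removed
    elif flagged == actually_hateful:
        print(f"correct: {text!r}")
```

Run on these three sentences, the flagger removes the news report and the harmless post along with the insult, which is the over-blocking risk in miniature.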
15.2.2 The Human Intervention in Hate Speech Detection and Removal
One of the strategies able to reduce the risk of structural over-blocking is the inclusion of some human involvement in the identification and analysis of potential hate speech content.Footnote 35 Such human involvement can take different forms, either internal content checking or external content checking.Footnote 36
In the first case, IT companies allocate to teams of employees the task of verifying the sensitive cases in which the algorithm was not able to single out whether the content is contrary to community standards.Footnote 37 Given the high number of doubtful cases, the employees work under stressful conditions.Footnote 38 They are asked to evaluate the potentially harmful content within a very short time frame, in order to decide whether the content should be taken down. This then provides additional feedback to the algorithm, which learns from the decision. In this framework, the algorithms automatically identify pieces of potentially harmful content, and the people tasked with confirming this barely have time to make a meaningful decision.Footnote 39
External content checking instead involves ‘trusted flaggers’ – that is, individuals or entities considered to have particular expertise and responsibilities for the purposes of tackling hate speech. Examples of such notifiers range from individuals or organised networks of private organisations, civil society organisations, and semi-public bodies, to public authorities.Footnote 40
For instance, YouTube defines trusted flaggers as individual users, government agencies, and NGOs that have identified expertise, (already) flag content frequently with a high rate of accuracy, and are able to establish a direct connection with the platform. It is interesting to note that YouTube does not fully delegate the content detection to trusted notifiers but rather affirms that ‘content flagged by Trusted Flaggers is not automatically removed or subject to any differential policy treatment – the same standards apply for flags received from other users. However, because of their high degree of accuracy, flags from Trusted Flaggers are prioritized for review by our teams’.Footnote 41
15.3 The Open Questions in the Collaboration between Algorithms and Humans
The added value of the human intervention in the detection and removal of hate speech is evident; nonetheless, concerns may still emerge as regards such an involvement.
15.3.1 Legal Rules versus Community Standards
As hinted previously, both the algorithms and the humans involved in the detection and removal of hate speech evaluate content vis-à-vis the community standards adopted by each platform. This distinction is clearly drawn also in the YouTube trusted flaggers programme, which states that ‘the Trusted Flagger program exists exclusively for the reporting of possible Community Guideline violations. It is not a flow for reporting content that may violate local law. Requests based on local law can be filed through our content removal form’.
These standards, however, do not fully overlap with the legal definition provided by EU law, pursuant to the Framework Decision 2008/913/JHA.
Table 15.1 shows that the definitions provided by the IT companies widen the scope of the prohibition on hate speech to sex, gender, sexual orientation, disability or disease, age, veteran status, and so forth. This may be interpreted as the achievement of a higher level of protection. However, the width of the definition is not always coupled with a correspondingly detailed definition of the selected grounds. For instance, the YouTube community standards list the previously mentioned set of attributes, providing some examples of hateful content. But the standard only sets out two clusters of cases: the encouragement of violence against individuals or groups based on the attributes, such as threats, and the dehumanisation of individuals or groups (for instance, calling them subhuman, comparing them to animals, insects, pests, disease, or any other non-human entity).Footnote 45 The Facebook Community policy provides a better example, as it includes a more detailed description of the increasing levels of severity attached to three tiers of hate speech content.Footnote 46 In each tier, keywords are provided to show the type of content that will be identified (by the algorithms) as potentially harmful.
Facebook definitionFootnote 42 | YouTube definitionFootnote 43 | Twitter definitionFootnote 44 | Framework Decision 2008/913/JHA
---|---|---|---
 | Hate speech refers to content that promotes violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes, such as: | Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories. | 
As a result, such wide hate speech definitions included within the Community Guidelines or Standards become de facto rules of behaviour for the users of these services.Footnote 47 The IT companies are allowed to evaluate a wide range of potentially harmful content published on their platforms, even though this content may not be illegal according to the Framework Decision 2008/913/JHA.
This has two consequences. First, there is an extended privatisation of enforcement as regards those conducts that are not covered by legal provisions with the risk of an excessive interference with the right to freedom of expression of users.Footnote 48 Algorithms deployed by IT companies will then have the power to draw the often-thin line between legitimate exercise of the right to free speech and hate speech.Footnote 49
Second, the extended notion of harmful content provided by community rules imposes a wide obligation on platforms regarding the flow of communication. This may conflict with the liability regime adopted pursuant to relevant EU law, namely the e-Commerce Directive, which draws a three-tier distinction of intermediary liability and, most importantly, prohibits any general monitoring obligation on ISPs pursuant to art. 15.Footnote 50 As will be addressed later, in the section on liability, striking the balance between sufficient incentives to block harmful content and over-blocking effects is crucial to safeguarding the freedom of expression of users.
15.3.2 Due Process Guarantees
As a consequence of the previous analysis, the issue of the procedural guarantees of users emerges.Footnote 51 A first question relates to the availability of internal mechanisms that allow users to be notified about potentially harmful content, to be heard, and to review or appeal against the decisions of IT companies. Although the strongest position safeguarding freedom of expression and the fair trial principle would suggest that any restriction (i.e., any removal of potentially harmful content) should be subject to judicial intervention,Footnote 52 the number of decisions adopted on a daily basis by IT companies does not allow for the intervention of potential victims and offenders, or of the judicial system. It should be noted that the Code of Conduct does not provide for any specific requirement in terms of judicial procedures or alternative dispute resolution mechanisms; thus it is left to the IT companies to introduce an appeal mechanism.
Safeguards to limit the risk of removal of legal content are provided instead in the Commission Recommendation on Tackling Illegal Content Online,Footnote 53 which includes within the wider definition of illegal content also hate speech.Footnote 54 The Recommendation points to automated content detection and removal and underlines the need for counter-notice in case of removal of legal content. The procedures involve the exchange between the user and the platform, which should provide a reply: in case of evidence provided by the user that the content may not be qualified as illegal, the platform should restore the content that was removed without undue delay or allow for a re-upload by the user; whereas, in case of a negative decision, the platform should include reasons for said decision.
Among the solutions proposed by the signatories to the Code of Conduct, Google provides for a review mechanism, allowing users to present an appeal against the decision to take down any uploaded content.Footnote 55 The evaluation of the justifications provided by the user is then processed internally, and the final decision is sent afterward to the user, with limited or no explanation.
A different approach is adopted by Facebook. In September 2019, the social network announced the creation of an ‘Oversight Board’.Footnote 56 The Board has the task of deciding appeals in selected cases that address potentially harmful content. Although the detailed regulation concerning the activities of the Board is still to be drafted, it is clear that it will not be able to review all the content under appeal.Footnote 57 Although this approach has been praised by scholars, several questions remain open: the transparency in the selection of the people entrusted with the role of adjudication, the type of explanation for the decisions taken, the risk of capture (in particular for the Oversight Board), and so on. And, at the moment, these questions are still unanswered.
15.3.3 Selection of Trusted Flaggers
As mentioned previously in Section 15.2.2, the intervention of trusted flaggers in content detection and removal became a crucial element in improving the results of said process. The process for identifying and recruiting trusted flaggers, however, is not always clear.
According to the Commission Recommendation, the platforms should ‘publish clear and objective conditions’ for determining which individuals or entities they consider as trusted flaggers. These conditions include expertise and trustworthiness, and also ‘respect for the values on which the Union is founded as set out in Article 2 of the Treaty on European Union’.Footnote 58
Such a level of transparency does not match the practice: although the Commission monitoring exercise provides data regarding at least four IT companies, with the percentage of notifications received from users vis-à-vis trusted flaggers as regards hate speech,Footnote 59 apart from the previously noted YouTube programme, none of the other companies provides a procedure for becoming a trusted flagger. Nor is any guidance provided on whether the selection of trusted notifiers is a one-time accreditation process or rather an iterative process in which the privilege is monitored and can be withdrawn.Footnote 60
This issue should not be underestimated, as the risk of rubberstamping the decisions of trusted flaggers may lead to over-compliance and excessive content takedown.Footnote 61
15.3.4 Liability Regime
When IT companies deploy algorithms and recruit trusted flaggers in order to proactively detect and remove potentially harmful content, they may run the risk of losing their exemption from liability under the e-Commerce Directive.Footnote 62 According to art. 14 of the Directive, hosting providers are exempted from liability when they meet the following conditions:
– Service providers provide only for the storage of information at the request of third parties;
– Service providers do not play an active role of such a kind as to give it knowledge of, or control over, that information.
In L’Oréal v. eBay,Footnote 63 the Court of Justice clarified that the fact that an online platform stores content (in the specific case, offers for sale), sets the terms of its service, and receives revenue from that service does not deprive the hosting provider of the exemptions from liability. This may happen, in contrast, when the hosting provider ‘has provided assistance which entails, in particular, optimising the presentation of the offers for sale in question or promoting those offers’.
This indicates that the active role of the hosting provider is only to be found where it intervenes directly in user-generated content.Footnote 64 If the hosting provider adopts technical measures to detect and remove hate speech, does it forfeit its neutral position vis-à-vis the content?
The liability exemption may still apply only if two other conditions set by art. 14 e-Commerce Directive apply. Namely,
– hosting providers do not have actual knowledge of the illegal activity or information and, as regards claims for damages, are not aware of facts or circumstances from which the illegal activity or information is apparent; or
– upon obtaining such knowledge or awareness, they act expeditiously to remove or to disable access to the information.
It follows that proactive measures taken by the hosting provider may result in that platform obtaining knowledge or awareness of illegal activities or illegal information, which could thus lead to the loss of the liability exemption. However, if the hosting provider acts expeditiously to remove or to disable access to content upon obtaining such knowledge or awareness, it will continue to benefit from the liability exemption.
From a different perspective, it is possible that the development of technological tools may have a reverse effect as regards the monitoring obligations applied to IT companies. According to art. 15 of the e-Commerce Directive, no general monitoring obligation may be imposed on hosting providers as regards illegal content. But in practice, algorithms may already perform such tasks. Would this indirectly legitimise monitoring obligations imposed by national authorities?
This is the question posed by an Austrian court to the CJEU as regards hate speech content published on the social platform Facebook.Footnote 65 The preliminary reference addressed the following case: in 2016, the former leader of the Austrian Green Party, Eva Glawischnig-Piesczek, was the subject of a set of posts published on Facebook by a fake account. The posts included rude comments, in German, about the politician, along with her image.Footnote 66
Although Facebook complied with the injunction of the first instance court within Austria, blocking access to the original image and comments, the social platform appealed against the decision. After the appeal decision, the case reached the Oberste Gerichtshof (Austrian Supreme Court). Upon analysing the case, the Austrian Supreme Court affirmed that Facebook can be considered an abettor to the unlawful comments; thus it may be required to take steps to prevent the repetition of the publication of identical or similar wording. However, in this case, an injunction imposing such a proactive role on Facebook could indirectly impose a monitoring obligation, which conflicts not only with art. 15 of the e-Commerce Directive but also with the previous jurisprudence of the CJEU. Therefore, the Supreme Court decided to stay the proceedings and present a preliminary reference to the CJEU. The Court asked, in particular, whether art. 15(1) of the e-Commerce Directive precludes a national court from making an order requiring a hosting provider, which has failed to expeditiously remove illegal information, not only to remove the specific information but also other information that is identical in wording.Footnote 67
The CJEU decided the case in October 2019. The decision argued that as Facebook was aware of the existence of illegal content on its platform, it could not benefit from the exemption of liability applicable pursuant to art. 14 of the e-Commerce Directive. In this sense, the Court affirmed that, according to recital 45 of the e-Commerce Directive, national courts cannot be prevented from requiring a host provider to stop or prevent an infringement. The Court then followed the interpretation of the AG in the case,Footnote 68 affirming that no violation of the prohibition of monitoring obligation provided in art. 15(1) of the e-Commerce Directive occurs if a national court orders a platform to stop and prevent illegal activity if there is a genuine risk that the information deemed to be illegal can be easily reproduced. In these circumstances, it was legitimate for a Court to prevent the publication of ‘information with an equivalent meaning’; otherwise the injunction would be simply circumvented.Footnote 69
Regarding the scope of the monitoring activity allocated to the hosting provider, the CJEU acknowledged that the injunction cannot impose excessive obligations on an intermediary and cannot require an intermediary to carry out an independent assessment of equivalent content deemed illegal, so automated technologies could be exploited in order to automatically detect, select, and take down equivalent content.
The CJEU decision tries as much as possible to provide a balance between freedom of expression and freedom to conduct a business, but the wide interpretation of art. 15 of the e-Commerce Directive can have indirect negative effects, in particular when looking at the opportunity for social networks to monitor through technological tools the upload of identical or equivalent information.Footnote 70 This approach safeguards the incentives for hosting providers to verify the availability of harmful content without incurring additional levels of liability. However, the use of technical tools may pave the way to additional cases of false positives, as they may remove or block content that is lawfully used, such as journalistic reporting on a defamatory post – thus opening up again the problem of over-blocking.
15.4 Concluding Remarks
Presently, we are witnessing an intense debate about technological advancements in algorithms and their deployment in various domains and contexts. In this context, content moderation and communication governance on digital platforms have emerged as a prominent but increasingly contested field of application for automated decision-making systems. Major IT companies are shaping the communication ecosystem in large parts of the world, allowing people to connect in various ways across the globe, but also offering opportunities to upload harmful content. The rapid growth of hate speech content has triggered the intervention of national and supranational institutions in order to restrict such unlawful speech online. In order to overcome the differences emerging at the national level and enhance the opportunity to engage international IT companies, the EU Commission adopted a co-regulatory approach, inviting regulators and regulatees to the same table so as to define shared rules.
This approach has the advantage of providing incentives for IT companies to comply with shared rules: as long as non-compliance with voluntary commitments does not lead to any liability or sanction, the risk of over-blocking may be avoided or at least reduced. Nonetheless, considerable incentives to delete not only illegal but also legal content remain. The community guidelines and standards presented herein show that the definition of hate speech and harmful content is not uniform, and each platform may set the boundaries of such concepts differently. When algorithms apply criteria defined on the basis of such differing concepts, they may unduly limit users’ freedom of speech, as they will lead to the removal of legal statements.
The Commission’s approach explicitly demands proactive monitoring: ‘Online platforms should, in light of their central role and capabilities and their associated responsibilities, adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices which they receive’. This, however, imposes de facto monitoring obligations, which may be carried out through technical tools that are far from free of flaws and bias.
From the technical point of view, the introduction of a human in the loop, as in the cases of trusted flaggers or the Facebook Oversight Board, does not dispel the questions of effectiveness, accessibility, and transparency raised by the mechanisms adopted. Both strategies show, however, that some space for stronger accountability mechanisms can be found, though the path to be pursued is still long.
16.1 Introduction
Technological advancements and cyberspace have forced us to reconsider the existing limits of private autonomy. Within the field of contract law, pursuant to regulatory strategies, the public dimension affects private interests in several ways. These include the application of mandatory rules and enforcement mechanisms capable of securing certain results and a sufficient level of effectiveness. This is particularly the case in European contract law, where the law pursues regulatory goals related to the establishment and enhancement of a common European market.Footnote 1
The digital dimension represents a severe challenge for European and national private law.Footnote 2 In order to address the implications of the new technologies for private law, recent studies have been conducted on, inter alia, algorithmic decisions, digital platforms, the Internet of Things, artificial intelligence, data science, and blockchain technology. The broader picture seems to indicate that, in the light of the new technologies, the freedom to conduct business has often turned into power. Digital firms are no longer only market participants: rather, they are becoming market makers capable of exerting regulatory control over the terms on which others can sell goods and services.Footnote 3 In so doing, they are replacing the exercise of states’ territorial sovereignty with functional sovereignty. This situation has raised concerns in different areas of law and recently also in the field of competition law.Footnote 4
As Lawrence Lessig pointed out, in the mid-1990s, cyberspace became a new target for libertarian utopianism where freedom from the state would reign.Footnote 5 According to this belief, the society of this space would be a fully self-ordering entity, cleansed of governors and free from political hacks. Lessig was not a believer in the described utopian view. He correctly pointed out the need to govern cyberspace, as he understood that, left to itself, cyberspace would become a perfect tool of ‘Control. Not necessarily control by government.’Footnote 6 These observations may be connected to the topic of private authorities who exercise power over other private entities with limited control by the state. The issue was tackled in a study by an Italian scholar that is now more than forty years old,Footnote 7 and more recently by several contributions on different areas of private law.Footnote 8 The emergence of private authorities was also affirmed in the context of global governance.Footnote 9 These studies categorized the forms and consequences of private authorities, identified imbalances of power, envisaged power-related rules of law, and questioned the legitimacy of private power. One of the main problems is that private authorities can be resistant to the application and enforcement of mandatory rules.
The present chapter aims to investigate whether and how blockchain technology platforms and smart contracts could be considered a modern form of private authority, which at least partially escapes the application of mandatory rules and traditional enforcement mechanisms.Footnote 10 Blockchain technology presents itself as democratic in nature, as it is based on an idea of radical decentralization.Footnote 11 This is in stark contrast to the giant Big Tech corporations operating over the internet in the fields of social networking, online search, online shopping, and so forth; with blockchain technology, users put their trust in a network of peers. Nevertheless, as happened with the internet, market powers could create monopolies or highly imbalanced legal relationships.Footnote 12 In this sense, contractual automation seems to play a key role in understanding the potentialities and the risks involved in the technology. In general terms, one of the main characteristics of a smart contract is its self-executing character, which should eliminate the possibility of a breach of contract. But smart contracts may also provide for effective self-help against breaches of traditional contracts. Finally, when implemented on blockchain platforms, smart contract relationships may also benefit from the application of innovative dispute resolution systems, which present themselves as entirely independent from state authorities.
16.2 Smart Contracts: Main Characteristics
In his well-recognized paper ‘Formalizing and Securing Relationships on Public Networks’, Nick Szabo described how cryptography could make it possible to write computer software able to resemble contractual clauses and bind parties in a way that would almost eliminate the possibility of breaching an agreement.Footnote 13 Szabo’s paper was just a first step, and nowadays basically every scholar interested in contract law can expound the essentials of how a smart contract functions. Some jurisdictions, such as Italy, have also enacted rules defining a smart contract.Footnote 14 The great interest is due to the growing adoption of Bitcoin and other blockchain-based systems, such as Ethereum.Footnote 15 The latter provides the necessary technology to carry out Szabo’s ideas.
Smart contracts do not differ greatly from natural language agreements with respect to the parties’ aims or interests.Footnote 16 In reality, except where the decision to conclude the contract is taken by an ‘artificial intelligent agent’, they merely form a technological infrastructure that makes transactions cheaper and safer.Footnote 17 The main quality of a smart contract lies in the automation of the contractual relationship, as the performance is triggered by an algorithm, in turn triggered by the fulfilment of certain events. In this sense, a distinction is often drawn between the notions of ‘smart contract’ and ‘smart legal contract’, reflecting the fact that contractual automation in the majority of cases affects only performance.Footnote 18 In contrast, the contract as such (i.e., the legal contract) is still a product of a meeting of the minds, through an offer and an acceptance.Footnote 19 In many cases, this induces parties to ‘wrap’ the smart contract in paper and to ‘nest’ it in a certain legal system.Footnote 20
It is therefore often argued that ‘smart contract’ is a misnomer, as the ‘smart’ part of the contract in reality affects only performance.Footnote 21 In addition, smart contracts are not intelligent but rely on an ‘If-Then’ principle, which means, for instance, that a given performance will be executed only when the agreed-upon amount of money is sent to the system.Footnote 22 These criticisms seem correct, and they go some way towards demystifying the phenomenon,Footnote 23 which is sometimes described as a game-changer that will impact every contractual relationship.Footnote 24 Discussions are beginning to be held on automated legal drafting, through which contractual clauses are shaped on the basis of big data by machine learning tools and predictive technologies, but for now these developments do not really affect the emerging technology of smart contracts on blockchain platforms.Footnote 25 The latter works on the basis of rather simple software protocols and other code-based systems, which are programmed ex ante without the intervention of artificial intelligence.Footnote 26
Nevertheless, the importance of the ‘self-executing’ and ‘self-enforcing’ character of smart contracts should not be underestimated. Most of the benefits arising from the new technology are in fact based on these two elements, which represent a source of innovation for general contract law. The ‘self-executing’ character should eliminate the occurrence of contractual breaches, whereas the ‘self-enforcing’ character makes it unnecessary to turn to the courts in order to obtain legal protection.Footnote 27 In addition, the code theoretically does not require interpretation, as it should not entail the need to explain ambiguous terms.Footnote 28 Currently, it is not clear whether smart contracts will diminish transaction costs, given the complexity of digital solutions and the need to acquire the necessary knowledge.Footnote 29 For reasons that will be outlined, implementation costs do not seem to harm the potential spread of smart contracts, especially in the fields of consumer contracts and the Internet of Things.
16.3 Self-Execution and Self-Enforcement
As stated before, through the new technology one or more aspects of the contract’s execution become automated, and once they have entered into the contract, parties cannot prevent performance from being executed. Smart contracts use blockchain to ensure the transparency of the contractual relationship and to create trust in the capacity to execute the contract, which depends on the technology involved. As previously stated, the operation is based on ‘If-Then’ statements, which are among the most basic building blocks of any computer program.
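To make these mechanics concrete, the following is a minimal, purely illustrative sketch in Python of such an ‘If-Then’ exchange. All names are hypothetical; a real smart contract would instead be deployed as deterministic code on a blockchain virtual machine (for instance, in a language such as Solidity on Ethereum).

# Minimal illustrative sketch of the 'If-Then' logic of a smart contract.
# Hypothetical names throughout; real smart contracts run as deterministic
# bytecode on a blockchain virtual machine, not as local Python objects.

from dataclasses import dataclass

@dataclass
class SimpleSaleContract:
    agreed_price: int   # amount due, e.g., in a token's smallest unit
    asset_id: str       # identifier of the digital asset to transfer
    executed: bool = False

    def on_payment(self, amount_received: int, buyer: str) -> None:
        """If the agreed price is paid, then the asset is transferred.
        Once triggered, neither party can halt execution: performance
        follows automatically from the payment event."""
        if self.executed:
            return  # contract already performed
        if amount_received >= self.agreed_price:
            self._transfer_asset(to=buyer)
            self.executed = True

    def _transfer_asset(self, to: str) -> None:
        print(f"Asset {self.asset_id} transferred to {to}")

# Usage: payment of the agreed amount triggers performance automatically.
contract = SimpleSaleContract(agreed_price=100, asset_id="token-42")
contract.on_payment(amount_received=100, buyer="0xBuyer")

The sketch also illustrates why personal trust in the counterparty becomes dispensable: once the payment event occurs, nothing in the protocol allows either party to withhold performance.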
Undeniably, such a technology can easily govern a simple contractual relationship, in which the system has only to determine whether a given amount of money has been paid in order for something to be received in return (e.g., a digital asset), or whether the performance is due because certain external conditions of the real world are met. Since a modification of the contractual terms of a smart contract implemented on a blockchain platform is hardly possible, execution appears certain, and personal trust or confidence in the counterparty is not needed.Footnote 30 This has led to the claim that in certain situations contracting parties will face the ‘cost of inflexibility’, as blockchain-based smart contracts are difficult to manipulate and therefore resistant to changes.Footnote 31 In fact, smart contracts are built on the assumption that there will be no modifications after the conclusion of the contract. As a result, if or when circumstances relevant to the smart contract change, a whole new contract would need to be written.
‘Inflexibility’ is often considered a weakness of smart contracts.Footnote 32 Supervening events and changes of circumstances may require parties to intervene in the contractual regulation and provide for amendments.Footnote 33 Therefore, legal systems contain rules that may lead to a judicial adaptation of the contract, sometimes through a duty to renegotiate its content.Footnote 34 In this regard, smart contracts differ from traditional contracts, as they take an ex ante view instead of the law’s common ex post judicial assessment.Footnote 35
In reality, this inflexibility does not constitute a weakness of smart contracts. Instead, it makes clear that self-execution and self-enforcement can bring substantial benefits only in certain legal relationships, where the parties are interested in a simple and instantaneous exchange. Moreover, self-execution does not necessarily affect the entire agreement. Indemnity payouts, insurance triggers, and various other provisions of the contract could be automated and self-fulfilling, while other provisions remain subject to ordinary bargaining and are expressed in natural language.Footnote 36 One can therefore correctly state that smart contracts automatically perform obligations arising from legal contracts, but not necessarily all the obligations. Finally, it should be observed that future contingencies that impact the contractual balance, such as an increase in the price of raw materials, could be assessed through lines of code in order to rationally adapt the contractual performance.Footnote 37
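Such an ex ante adaptive clause might, under stated assumptions, look roughly as follows. The figures, the cap, and the index are invented for illustration only and do not reflect any particular contract discussed in the literature.

# Sketch of an ex ante adaptive clause: the contract price tracks a
# raw-materials index agreed by the parties. All figures are hypothetical.

BASE_PRICE = 10_000.0   # agreed price at the conclusion of the contract
BASE_INDEX = 100.0      # raw-materials index at the conclusion
CAP = 1.20              # the parties cap the adjustment at +20%

def adjusted_price(current_index: float) -> float:
    """Adapt performance rationally to a supervening price change."""
    factor = min(current_index / BASE_INDEX, CAP)
    return BASE_PRICE * max(factor, 1.0)  # no downward adjustment agreed

print(adjusted_price(112.0))  # index rose 12% -> price becomes 11200.0

The design choice is precisely the one the text describes: the contingency is priced into the code at the moment of conclusion, rather than handled through an ex post judicial adaptation.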
The latter issue makes clear that the conditions for contractual performance often relate to the real, non-digital world outside of blockchains. It is therefore necessary to create a link between the real world and the blockchain. Such a link is provided by so-called oracles, which can be defined as interfaces through which information from the real world enters the ‘digital world’. There are different types of oracles,Footnote 38 and some scholars argue that their operation harms the self-executing character of smart contracts, because execution is eventually remitted to an external source.Footnote 39 Given the technology involved, oracles do not seem to impair the automated execution of smart contracts. The main challenge with oracles is that contracting parties need to trust these outside sources of information, whether they come from a website or a sensor. As oracles are usually third-party services, they are not subject to the security of the blockchain’s consensus mechanisms. Moreover, their mistakes or inaccuracies are not subject to the rules that govern breach of contract between the two contracting parties.
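A minimal sketch may clarify the oracle’s role as a mere interface: it supplies the condition of the ‘If-Then’ statement but does not itself execute the contract. The service and all names below are hypothetical.

# Illustrative sketch of an oracle: the interface through which real-world
# information enters the 'digital world' of the smart contract.

from abc import ABC, abstractmethod

class Oracle(ABC):
    """Abstraction of a third-party source of off-chain information."""
    @abstractmethod
    def query(self, question: str) -> bool: ...

class DeliveryOracle(Oracle):
    """Reports whether a shipment was delivered, e.g., from a carrier's
    website or an IoT sensor. The parties must trust this source: it sits
    outside the blockchain's consensus mechanisms."""
    def query(self, question: str) -> bool:
        return True  # stubbed; would call an external API or sensor

def settle_on_delivery(oracle: Oracle, release_payment) -> None:
    # If the oracle reports delivery, then payment is released: the oracle
    # feeds the 'If-Then' condition, it does not execute the contract.
    if oracle.query("shipment SH-1 delivered?"):
        release_payment()

settle_on_delivery(DeliveryOracle(), lambda: print("payment released"))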
In the light of the above, self-execution and self-enforcement assure an automated performance of the contract. Nevertheless, whether due to an incorrect intervention of an oracle, a technological dysfunction, or an error in the programming, things may go wrong and leave contracting parties dissatisfied. In these cases, there could be an interest in unwinding the smart contract. According to a recent study,Footnote 40 this can be done in three ways. Needless to say, the parties can unwind the legal contract in the old-fashioned way by refunding what they have received from the other party, be it voluntarily or under judicial coercion. At any rate, it would be closer to the spirit of fully automated contracts if the termination of the contract and its unwinding could also be recorded in the computer code itself and thus carried out automatically.Footnote 41 Finally, it is theoretically possible to provide for technical modifications of the smart contract in the blockchain. The three options, as the author also argues,Footnote 42 are not easily feasible, and there is the risk of losing the advantages related to self-execution. It is therefore of paramount importance to devote attention to the self-help and dispute resolution mechanisms developed on blockchain platforms.Footnote 43
16.4 Automated Self-Help
The functioning of smart contracts may also give rise to a vast new array of self-help tools (i.e., enforcement mechanisms that do not require the intervention of state power). The examples of self-help that have recently been discussed relate to Internet of Things technology.Footnote 44 The cases under discussion concern self-enforcement devices that automatically react to a contractual breach and put the creditor in a position of advantage with respect to the debtor. The latter, who is in breach, cannot exercise any legal defence vis-à-vis automated self-help based on algorithms. Scholars who have addressed the issue stress the dangers connected with a pure exercise of private power through technology.Footnote 45
Among the most frequent examples is the lease contract, for which a smart contract could automatically send a withdrawal communication in case of a two-month delay in the payment of the lease instalment. If the lessee does not pay the outstanding instalments within one month, the algorithm automatically locks the door and prevents the lessee from entering the apartment. Another example is the ‘starter interrupt device’, which can be connected to a bank loan used to buy a vehicle. If the owner does not pay the instalments, the smart contract prevents the vehicle from starting. Similar examples are present in the field of utilities (gas, electricity, etc.).Footnote 46 If the customer does not pay for the service, the utilities are no longer available. Looking at general contractual remedies, the potential of such self-help instruments appears almost unlimited. Automation could also extend to the payment of damages or liquidated damages.
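The lease example can be rendered schematically as follows. The sketch is illustrative only, its timing mirrors the example in the text (notice after two months of arrears, smart lock one month later), and whether such enforcement respects due notice and mandatory proceedings is precisely the legal question discussed below.

# Sketch of automated self-help in a lease contract. Hypothetical names;
# the lawfulness of such enforcement is a separate legal question.

def lease_self_help(months_in_arrears: int,
                    send_withdrawal_notice,
                    lock_door) -> None:
    if months_in_arrears >= 2:
        send_withdrawal_notice()  # automated withdrawal communication
    if months_in_arrears >= 3:
        lock_door()               # lessee can no longer enter

lease_self_help(
    months_in_arrears=3,
    send_withdrawal_notice=lambda: print("notice of withdrawal sent"),
    lock_door=lambda: print("smart lock engaged"),
)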
Self-help devices take advantage of technology and put an effective tool in creditors’ hands, one which – at the same time – reduces the costs of enforcement and significantly enhances the effectiveness of contractual agreements. This is mainly due to the fact that recourse to a court is no longer necessary. Contractual automation may increase awareness of the importance of fulfilling obligations on time. Moreover, the reduction of enforcement costs may lead to a decrease in prices for diligent contracting parties. At any rate, as correctly pointed out, the described ‘technological enforcement’ – although effective – does not necessarily respect the requirements set by the law.Footnote 47 In other words, even if smart contracts are technologically enforceable, they are not necessarily also legally enforceable.Footnote 48 In the examples outlined previously, it is possible to imagine a withdrawal from the contract without due notice or the payment of an exorbitant sum of money as damages.
How should the law react to possible divergences between the code and the law? It seems that a kind of principle of equivalent treatment should provide guidance in resolving such cases:Footnote 49 limits that exist for the enforcement of traditional contracts should be extended to smart contracts. From a methodological point of view, practical difficulties in applying the law should not prevent an assessment of the (un)lawful character of certain self-help mechanisms. In cases where the law provides for mandatory proceedings or legal steps in order to enforce a right, the same should in principle apply to smart contracts.
Nevertheless, evaluation of the self-help mechanisms’ lawfulness should not be too strict, and should essentially be aimed at protecting fundamental rights – for instance, the right to housing. ‘Automated enforcement’ relies on party autonomy and cannot per se be considered an act of oppression exercised by a ‘private power’. Therefore, apart from the protected rights, the assessment should also involve the characteristics of the contracting parties and the subject matter of the contract. In this regard, it has correctly been pointed out that EU law provides for some boundaries of private autonomy in consumer contracts, which apply to smart contracts.Footnote 50
For instance, the unfair terms directiveFootnote 51 indicates that clauses which exclude or hinder a consumer’s right to take legal action may create a significant imbalance in the parties’ rights and obligations.Footnote 52 The same is stated with respect to clauses irrevocably binding the consumer to terms with which she or he had no real opportunity of becoming acquainted before the conclusion of the contract.Footnote 53 According to the prevailing opinion, the scope of application of the unfair terms directive also covers smart contracts, even if the clauses are expressed through lines of code.Footnote 54
Undeniably, smart contracts may make it difficult for consumers to exercise a right against illicit behaviour on the part of the business. At any rate, it would not be proper to consider the self-help systems unlawful as such. The enforcement of EU consumer law is also entrusted to public authorities,Footnote 55 which in the future may exercise control over the contractual automation processes adopted and require modifications in businesses’ computer protocols. If the self-help reacts to a breach by the consumer, it should not in principle be considered unfair. On the one hand, contractual automation may provide for lower charges, due to savings in enforcement costs. On the other hand, it could augment the reliability of consumers by excluding opportunistic choices and making them immediately aware of the consequences of a breach. Finally – as will be seen – technological innovation must not be seen only as a menace for consumers, as it could also improve the application of consumer law and, therefore, enhance its level of effectiveness.Footnote 56
16.5 Automated Application of Mandatory Rules
A lively debate concerns the application of mandatory rules in the field of smart contracts. The risk that this innovative technology could be used as an instrument for unlawful activities, such as the conclusion of immoral or criminal contracts, is often pointed out.Footnote 57 Their mode of operation may render smart contracts and blockchain technology attractive to ill-intentioned people interested in engaging in illicit acts.
Among the mandatory rules that may be infringed by smart contracts, special attention is dedicated to consumer law.Footnote 58 The characteristics of smart contracts make them particularly suited to the interests of businesses in business-to-consumer relationships, as blockchain technology can guarantee a high level of standardization and potentially serve as a vehicle for the conclusion of mass contracts. On the application of mandatory consumer law to smart contracts, opinions differ significantly. According to one author, smart contracts will bring about the end of consumer law, as they may systematically permit businesses to escape its application.Footnote 59 The claim has also been made that automated enforcement in the sector of consumer contracts amounts to an illusion, as mandatory rules prevent the use of automated enforcement mechanisms.Footnote 60
Both opinions seem somewhat overstated and fail to capture the most interesting aspect of smart consumer contracts. In fact, as has recently been discussed, technology and contractual automation may also be used as tools to enforce consumer law and augment its level of effectiveness.Footnote 61 Many consumers are indeed not aware of their rights or, even if they are, find it difficult to enforce them, due to the costs involved and a lack of experience. In addition, most consumer contractual claims are of insignificant value.
In this regard, a very good example is given by the EU Regulation on compensation for long delays of flights.Footnote 62 The consumer has a right to fixed compensation, depending on the length of the flight, ranging from 125 to 600 euros. For the reasons outlined previously, what often happens is that consumers do not claim compensation; the compensation scheme thus lacks effectiveness. In the interest of consumers, reimbursement through a smart contract device has been proposed to automate the process.Footnote 63 The latter would work on the basis of a reliable system of external interfaces.Footnote 64 The proposal seems feasible and is gaining attention, especially in Germany, where the introduction of a smart compensation scheme for cancellations or delays of flights has been discussed in Parliament.Footnote 65
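A smart compensation device of this kind might be sketched as follows. The tiered amounts reflect the fixed compensation of the Regulation, halved in certain re-routing cases (hence the 125 to 600 euro range noted above); the delay threshold and the data feed are simplified assumptions, and a real implementation would rely on a trusted flight-status oracle of the kind described earlier.

# Sketch of automated compensation under the EU flight compensation
# Regulation, as proposed in the literature. Trigger and feed simplified.

def compensation_eur(distance_km: float, delay_hours: float,
                     halved: bool = False) -> float:
    """Fixed compensation by flight distance; zero below the threshold.
    'halved' models the 50% reduction the Regulation allows in certain
    re-routing cases, which yields the 125-600 euro range."""
    if delay_hours < 3:  # delay threshold per CJEU case law
        return 0.0
    if distance_km <= 1500:
        amount = 250.0
    elif distance_km <= 3500:
        amount = 400.0
    else:
        amount = 600.0
    return amount / 2 if halved else amount

# A smart contract device would pay out automatically once a trusted
# flight-status oracle reports the delay:
if (c := compensation_eur(distance_km=1200, delay_hours=4)) > 0:
    print(f"transfer {c} EUR to passenger")  # -> transfer 250.0 EUR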
Two possible drawbacks attach to this type of legislative intervention. Given the wide distribution of the technology, which crosses national borders, the adoption of smart enforcement may produce strong distortions of international competition.Footnote 66 For instance, the imposition of a smart compensation model such as the one discussed in Germany for the delay or cancellation of flights may increase costs for airlines that operate predominantly in that country. In order not to harm the aims of the internal market, smart enforcement should thus be implemented at the European level.
Another danger of the proposed use of smart contract devices is ‘over-enforcement’.Footnote 67 The latter may be detrimental because it could induce businesses to cease an activity altogether in order to escape liability and sanctions. The described adoption of technology in cases of flight delays may produce a digitalization of enforcement that brings the rate of unpaid compensation down to almost zero. This scenario is not necessarily beneficial for consumers, as the additional costs borne by airlines would probably be passed on to all customers through an increase in prices. The level of technology required to automatically detect every single delay and grant compensation to the travellers would probably lead to an explosion in costs for the companies. While this may increase efficiency in the sector, it is questionable whether such a burden would be bearable for the airlines. This is not to say that strict enforcement is inherently evil: enforcement of existing rules is of course a positive aspect. Nevertheless, the economic problems it may give rise to suggest that enforcement through technological devices should be considered an independent element that could in principle also require modifications of substantive law.Footnote 68 For instance, the technology could enable the recognition of ‘tailored’ amounts of compensation depending on the seriousness of the delay.Footnote 69
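Such ‘tailored’ compensation might, purely hypothetically, scale with the delay rather than jump in fixed tiers. The schedule below is invented for illustration and corresponds to no existing rule.

# Hypothetical sketch of 'tailored' compensation scaled to the seriousness
# of the delay, as the text suggests substantive law might one day allow.
# The rate and cap below are invented for illustration only.

def tailored_compensation_eur(delay_hours: float, cap: float = 600.0,
                              rate_per_hour: float = 75.0) -> float:
    """Compensation grows with the delay instead of jumping in fixed tiers."""
    if delay_hours < 1:
        return 0.0
    return min(rate_per_hour * delay_hours, cap)

for h in (0.5, 2, 5, 10):
    print(h, "h ->", tailored_compensation_eur(h), "EUR")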
Many aspects remain uncertain, and it is not surprising that, as things stand, smart enforcement mechanisms are not (yet) at the core of legislative intervention.Footnote 70 In reality, the current regulatory approach is quite the opposite. Legislators are not familiar with the new technologies and tend to lighten the obstacles that mandatory rules set for blockchain technology, with the aim of not hampering its evolution.Footnote 71 In many legal systems, ‘regulatory sandboxes’ have been created,Footnote 72 in order to support companies active in the fields of fintech and blockchain technology. In general terms, regulatory sandboxes enable companies to test their products with real customers in an environment that is not subject to the full application of legal rules. In this context, regulators typically provide guidance, with the aim of creating a collaborative relationship between the regulator and the regulated companies. The regulatory sandbox can also be considered a form of principles-based regulation, because it lifts some of the more specific regulatory burdens from sandbox participants by affording flexibility in satisfying the sandbox’s regulatory goals.Footnote 73 This line of reasoning shows the willingness of legislators not to impede technological progress and to help domestic companies. The approach inevitably produces clashes when it comes to the protection of consumers’ interests.Footnote 74
16.6 Smart Contracts and Dispute Resolution
Even if the claim that ‘code is law’ or the expression ‘lex cryptographica’Footnote 75 may appear exaggerated, it seems evident that developers of smart contracts and blockchain platforms aim to create an order without law and to implement a private regulatory framework. Achieving such a goal requires shaping a model of dispute resolution capable of resolving conflicts efficiently, without the intervention of national courts and state power.Footnote 76 The self-executing character of smart contracts does not prevent disputes from occasionally arising between parties, connected for instance to defects in the product purchased or to the existence of an unlawful act. Moreover, the parties’ agreement cannot always be encoded in ‘if-then’ statements and may need to be expressed through non-deterministic notions and general clauses such as good faith and reasonableness. Unless artificial intelligence develops to the stage where a machine can substitute for human reasoning in filling gaps in the contract or giving effect to general clauses,Footnote 77 contractual disputes may still arise. The way smart contracts operate could lead parties to abandon the digital world and resolve their disputes off-chain. The issue is of great importance, as the practical difficulties of resolving possible disputes between the parties could obscure the advantages connected with contractual automation.Footnote 78
One of the starting points in discussions about dispute resolution in the field of blockchain is the observation that regular courts are nowadays not well enough equipped to face the challenges arising from the execution of lines of code.Footnote 79 This claim may be correct at this stage, but it does not rule out courts acquiring the capacity to tackle such disputes in the future. In reality, a part of the industry is attempting to establish a well-organized and completely independent jurisdiction in the digital world through the intervention of particular types of oracles, usually called ‘adjudicators’ or ‘jurors’.Footnote 80
Whether such a dispute resolution model can work depends strictly on the coding of the smart contract. As seen before,Footnote 81 once a smart contract is running, in principle neither party can stop the protocol, reverse an already executed transaction, or otherwise amend the smart contract. Therefore, the power to interfere with the execution of the smart contract must be foreseen ex ante and granted to a trusted third party. The latter is allowed to make determinations beyond the smart contract’s capabilities. It will feed the smart contract with information and, if necessary, influence its execution in order to give effect to the trusted third party’s determination.Footnote 82
Independence from the traditional judiciary is ensured by ‘routine escrow mechanisms’. Rather than being paid directly to the seller, the sale price is kept in escrow by a third party. If no dispute arises from the contract, the funds held in escrow are released in favour of the seller.Footnote 83 Nowadays, platforms adopt sophisticated systems based on ‘multi-signature addresses’, which do not give exclusive control of the price to the third party involved as adjudicator.Footnote 84 This amounts to an additional guarantee in favour of the contracting parties.Footnote 85 The outcome is a kind of advanced ODR system,Footnote 86 which is particularly suitable for the high-volume, low-value consumer complaints market.Footnote 87
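The escrow-plus-multi-signature arrangement can be sketched as follows: funds are released only when two of the three key holders agree, so that neither the adjudicator nor either party alone controls the price. The mechanics and names are simplified assumptions, not the protocol of any particular platform.

# Sketch of a routine escrow with a 2-of-3 'multi-signature' release:
# buyer, seller, and adjudicator each hold a key, and no single party
# (not even the adjudicator) controls the funds alone.

class Escrow:
    PARTIES = {"buyer", "seller", "adjudicator"}

    def __init__(self, amount: float):
        self.amount = amount
        self.approvals: set[str] = set()
        self.released = False

    def approve_release(self, party: str) -> None:
        if party in self.PARTIES:
            self.approvals.add(party)

    def try_release_to_seller(self) -> bool:
        # 2-of-3 signatures unblock the funds: buyer plus seller if no
        # dispute arises, or the adjudicator plus one party after a decision.
        if len(self.approvals) >= 2 and not self.released:
            self.released = True
            print(f"{self.amount} released to seller")
        return self.released

escrow = Escrow(amount=250.0)
escrow.approve_release("buyer")   # no dispute: buyer approves
escrow.approve_release("seller")
escrow.try_release_to_seller()    # -> 250.0 released to seller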
The autonomous dispute resolution system is not conceived as a modern form of the judiciary.Footnote 88 It is presented as a return to the pre-Westphalian past, where jurisdiction did not usually emanate from state sovereignty but from a private service, largely based on the consent of the disputing parties. Nevertheless, given the development of the modern state judiciary, there are many problematic aspects of dispute resolution on blockchain platforms. For instance, it has been pointed out that the decision is rendered by subjects who do not necessarily have legal knowledge (and who are often selected through a special ranking based on users’ appreciation), that the decision cannot be recognized by a state court in the way an arbitral award can, and that enforcement does not respect the time limits and safeguards provided by regular enforcement proceedings.Footnote 89
With respect to these issues, the fear is that such advanced ODR systems, based on rules autonomous from those of national legal systems, may limit the importance of the latter in regulating private relationships.Footnote 90 On the other hand, some authors affirm that such procedures, under certain conditions, may become a new worldwide model of arbitration.Footnote 91
In this case too, the advantages of the dispute resolution procedures are strictly connected to the self-enforcing character of the decision. The legitimacy of such proceedings must be carefully assessed, but the outcome should not necessarily be considered unlawful. The parties voluntarily chose to submit to the scrutiny of the adjudicator, and from a private law perspective the situation does not differ significantly from that of a third-party arbitrator who determines the contents of the contract. In addition, automated enforcement does not reach the debtor’s entire estate; the assets subject to the assignment decided by the adjudicator are deliberately made available by the parties. It is not yet clear how far such proceedings will spread or whether they could functionally substitute for state court proceedings. Needless to say, in the absence of specific recognition by legal rules, these dispute resolution mechanisms are subject to the scrutiny of state courts.Footnote 92 Although it may be difficult in practice, the party who does not agree with a decision that is not legally recognizable may bring the dispute before the competent state court.
16.7 Conclusion
The actual dangers caused by the creation of private powers on blockchain platforms relate to the technology that enables the automation of the contractual relationship. On the one side, if rights and legal guarantees are excluded or limited, the adoption of self-enforcement devices should of course be considered unlawful. On the other side, however, every situation has in principle to be carefully assessed, as the contracting parties have freely chosen to enter into a smart contract.
Problems may arise when smart contracts are used as a means of self-help imposed by one of the contracting parties. An automated application of remedies may harm the essential interests of debtors. Nevertheless, automation does not seem to infringe debtors’ rights if enforcement complies with the deadlines and legal steps provided by the law. Moreover, some economic advantages arising from automation may produce positive effects for whole categories of users, and self-enforcement could also become an efficient tool in the hands of the European legislator to significantly augment the effectiveness of consumer protection.
In the light of the issues examined herein, if the technology is to augment users’ trust in the functioning of smart contracts and blockchain, it should not aim to abandon the law.Footnote 93 To be successful in the long run, innovative enforcement and dispute resolution models should respect and emulate legal guarantees. Smart contracts are not necessarily constructed with the democratic oversight and governance that are essential for a legitimate system of private law.Footnote 94 Widespread acceptance of new services requires that the main pillars on which legal systems are based not be erased.