
On the Governance of Artificial Intelligence through Ethics Guidelines

Published online by Cambridge University Press:  02 October 2020

Lund University


This article uses a socio-legal perspective to analyze the use of ethics guidelines as a governance tool in the development and use of artificial intelligence (AI). This has become a central policy area in several large jurisdictions, including China and Japan, as well as the EU, focused on here. Particular emphasis in this article is placed on the Ethics Guidelines for Trustworthy AI published by the EU Commission’s High-Level Expert Group on Artificial Intelligence in April 2019, as well as the White Paper on AI, published by the EU Commission in February 2020. The guidelines are examined against partially overlapping and already-existing legislation, as well as against the ephemeral conceptual construct surrounding AI as such. The article concludes by pointing to (1) the challenges of a temporal discrepancy between technological and legal change, (2) the need for moving from principle to process in the governance of AI, and (3) the multidisciplinary needs in the study of contemporary applications of data-dependent AI.

Law and Artificial Intelligence in Asia
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence, which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© The Author(s), 2020


Much like Karl Renner looked for the principal content of property law in times of technology-driven societal transformation in industrializing Western Europe,Footnote 1 contemporary society is seeking its proper forms of governance in a digital transformationFootnote 2 driven by platformization,Footnote 3 datafication,Footnote 4 and algorithmic automation.Footnote 5 Much as Eugene Ehrlich proposed a study of the living law,Footnote 6 paralleled by Roscoe Pound’s separation of law in books from law in action,Footnote 7 contemporary governance of artificial intelligence (AI) is also separable in terms of hard and soft law.Footnote 8 This article can be read in light of these foundational socio-legal scholars, who shaped the sociology of law as a scientific discipline and inspired much thought on the relationship between social change, law, and new technology.Footnote 9

In its communication from April 2018,Footnote 10 the EU adopted an explicit strategy for AI and appointed the High-Level Expert Group on AI, consisting of 52 members, to provide advice on both investments and ethical-governance issues in relation to AI in Europe. In April 2019, the expert group published the Ethics Guidelines for Trustworthy Artificial Intelligence (hereinafter the Ethics Guidelines),Footnote 11 which—despite explicitly pointing out that the guidelines do not deal with legal issues—clearly identify responsibility, transparency, and data protection as central to the development of trustworthy AI. Over the last few years, a number of ethics guidelines relating to AI have been developed: by companies, research associations, and government representatives.Footnote 12 Many overlap in part with already-existing legislation, but it is often unclear how, more precisely, the legislation and guidelines are intended to interact. In particular, it is often unclear how the principled standpoints are intended to be implemented. In other words, the ethics guidelines focus on normative standpoints, but are often weak from a procedural perspective. The Ethics Guidelines of the EU Commission’s expert group are a clear sign of an ongoing governance challenge for the EU and its Member States.
Interestingly, during her candidature, Ursula von der Leyen, the new president of the EU Commission, stated that, during her first 100 days in office, she would “put forward legislative proposals for a coordinated European approach to the human and ethical implications of AI.”Footnote 13 Consequently, in February 2020, the European Commission issued a digital strategy, including proposals for fostering excellence and trust in AI, and a White Paper on AI.Footnote 14 At the same time, the EU Commission’s take on AI development and governance reflects a global trend among governments and jurisdictions of seeing both societal and industrial benefits in AI, in tandem with ethical and legal concerns that need to be addressed and governed. This notion of high-potential/high-risk development and governance has earlier been described as “inevitably and dynamically intertwined” with regard to emerging technologies.Footnote 15

Part of the challenge for the EU arguably consists of balancing regulation against the trust that exists in technical innovation and societal development overall, to which AI and its methods can contribute, and which it is therefore not desirable to risk undermining with unbalanced or hastily introduced regulation. As societal use of, and dependency on, AI and machine learning increase, society increasingly needs to understand any negative consequences and risks, how interests and power are distributed, and what needs exist for both legal and other types of governance.

This article focuses on ethics guidelines as tools for governance, points to the interplay with legal tools for governance, and discusses the particular features of AI development that have led to ethics issues gaining such a prominent position. Particular focus is placed on the Ethics Guidelines for “trustworthy AI” as well as the Commission’s White Paper on AI. First, the article focuses on the definitional struggles around the concept of AI, in order to clarify the relationship between the definition and the governance of AI. Since the definition of AI is highly contested, and may depend on the disciplinary field of the person making it, the choice of definition arguably affects how AI is governed. Here, this article argues for the need to regard the technologies in their applied context, and in their interplay with human values and societal expressions, which is not least underlined by the dependence of machine learning on large amounts of data or examples as its foundation. Second, the key features of the ethical approach to AI governance are outlined, addressing some of its critique, with a particular focus on the EU. This must arguably be placed in a broader context of governance tools that nevertheless often share some principle-based central values relating to the control of data, the degree of reasonable transparency, and the allocation of responsibility; a brief comparison with Chinese and Japanese guidelines is also provided. Finally, the article concludes with a socio-legal perspective on ethics guidelines as a form of governance over AI development.
Governance through ethics guidelines is highly dependent on recent insights from critical AI studies about the societal challenges relating to fairness, accountability, and transparency.Footnote 16 At the same time, the governance issue must inevitably deal with the temporal aspects of the difference between how legislation is formed and how rapid the development of the underlying elements of AI has been.


Despite—or perhaps because of—the increased attention that AI and its methods are receiving in multidisciplinary research, media, and policy work, there is no clear consensus on how AI should best be defined. This seems to be the case with regard not only to public perceptions,Footnote 17 but also to computer scienceFootnote 18 and law.Footnote 19 For example, Gasser and Almeida establish that one cause of the difficulty of defining AI from a technical perspective is that AI is not a single technology, but rather “a set of techniques and subdisciplines ranging from areas such as speech recognition and computer vision to attention and memory, to name just a few.”Footnote 20 A number of definitions have been expressed, both within research and in government agency reports, but a major challenge is that the methods constitute a moving and changing field. I would here like to emphasize the dynamics of the conceptual construct as it has been discussed within traditional AI research, offer some central aspects that can still be highlighted, and show what the High-Level Expert Group is concentrating on.

In conjunction with the High-Level Expert Group publishing the Ethics Guidelines, a definition document was also published, aimed at clarifying certain aspects of AI as a scientific discipline and as a technology.Footnote 21 One purpose that it highlights is to avoid misunderstanding, to achieve a commonly shared knowledge of AI that can be used fruitfully by non-experts as well, and to indicate details that may contribute to the discussion about the Ethics Guidelines. The High-Level Expert Group uses as its first starting point the definition provided in the EU Commission’s communication on AI in Europe, published in April 2018, which is then developed further:

Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals.

AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications). (The High-Level Expert Group, 2019b, p. 1)

This definition concentrates particularly on autonomy—namely, that there is a measure of agency in AI systems—and points out that the systems can consist of both physical robots and pure software systems. At the same time, the examples provide a clear indication of what they are aiming at and, by extrapolation, what the governance objects of the Ethics Guidelines consist of. As a software-based category, they point to voice assistants, image-analysis software, search engines, and speech and face-recognition systems, while, for hardware-based applications, they indicate advanced robots, autonomous cars, drones, and the connected devices that are seen as part of the Internet of Things. As autonomy is emphasized, this can be interpreted as not applying to all drones or all connected devices—only to those that have an autonomous or even learning element. What characterizes an “advanced” robot is not necessarily a simple demarcation, and one we can expect to change over time. This is clearly a “moving target” that seems to be an inherent element of AI, sometimes described as an “odd paradox” or the “AI effect.”Footnote 22

The High-Level Expert Group also notes that an explicit part of AI is the concept of intelligence, a particularly elusive element that has been included since the field originated. Legg and Hutter, for example, gather together more than 70 different definitions of the intelligence concept in itself.Footnote 23 In addition to listing a number of psychological definitions, they also indicate how the definitions used in AI research have focused on different inherent aspects, with differing emphasis on problem-solving, improvement and learning over time, good performance in complex environments, or the generalizability of achieving domain-independent skills that are needed to manage a number of domain-specific problems. The intelligence concept also evokes a number of human associations, such as the ability to have feelings and self-awareness, which cannot be said to be part of the methods and technologies that are causing the explosion of applied AI today, and which are thus not a central object for governance through ethics guidelines. It can therefore be established that contemporary AI primarily comprises a number of technologies and analysis methods gathered under the umbrella concept of “artificial intelligence,” namely machine learning, natural language processing, image recognition, “neural networks,” and deep learning. Machine learning in particular—which, expressed in simple terms, concerns methods for making computers “learn” from data, without the computers having been programmed for that particular task—has developed rapidly in just the last few years through access to historically incomparable amounts of digital data and increasing analytical processing power.
This has led to contemporary AI generally referring to “the computational capability of interpreting huge amounts of information in order to make a decision, and is less concerned with understanding human intelligence, or the representation of knowledge and reasoning,” according to Virginia Dignum, a professor in AI and ethics who is also a member of the High-Level Expert Group.Footnote 24
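The "learning from data, without being programmed for the task" that characterizes this data-dependent sense of AI can be illustrated with a deliberately minimal sketch. The data, the linear model, and the least-squares fitting method below are illustrative assumptions of mine, not drawn from the article or the Guidelines:

```python
# Minimal illustration of "learning from data": the program is never told
# the rule y = 2x + 1; it estimates the parameters from examples instead.
from statistics import mean

# Training examples generated by a rule unknown to the program.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # in reality: y = 2x + 1

# Ordinary least squares for a line y = a*x + b, fitted from the data alone.
x_bar, y_bar = mean(xs), mean(ys)
a = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
b = y_bar - a * x_bar

# The "learned" model now generalizes to inputs it has never seen.
print(a, b)        # 2.0 1.0
print(a * 10 + b)  # 21.0
```

The societal concerns discussed in the article arise precisely because real systems fit far more complex models to far larger data sets: whatever imbalances the training examples contain, the fitted model reproduces.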

The complexity of the conceptual construct has led the High-Level Expert Group to put forward a fairly complex definition, which thus expands the EU Commission’s first definition. It also situates AI functionality in its systemic context—namely, the fact that it is often part of a larger wholeFootnote 25—notes machine learning’s division between structured and unstructured data, and emphasizes that AI systems are primarily goal-driven, seeking to achieve something that a human being has defined:

Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.Footnote 26

There are thus differing aspects of AI to be considered when defining AI as a challenge to regulation, where the most central ones for today’s development and use of AI tend to concern (1) autonomy/agency, (2) self-learning from large amounts of data (or “adaptability”), and (3) the degree of generalizable learning. Finally, as a step towards a wider social sciences-based discussion, and in light of the challenges that AI has displayed in its implementation and interaction with society’s values and structure, it can be argued that there are multidisciplinary advantages to not leaning too heavily towards a computer science-based definition of AI. The definition is in itself a form of conceptual control that impacts the regulatory debate, and we therefore need to be both careful and multidisciplinary in our approach when making definitions.Footnote 27


If we first look at the discussions about AI and ethics held in the global arena, we can establish that it is currently a lively subject among academics and policy-oriented bodies. Ethics guidelines in particular have developed very strongly as a governance tool over the last few years. For example, a study of the global AI ethics landscape, published in 2019, identified 84 documents containing ethical principles or guidelines for AI.Footnote 28 The study concluded that there is relative unanimity globally on at least five principle-based approaches of an ethical character: (1) transparency; (2) justice and fairness; (3) non-harmful use; (4) responsibility; and (5) integrity/data protection. At the same time, the study establishes that there are considerable differences in how these principles are interpreted; why they are considered important; what issue, domain, or actors they relate to; and how they should be implemented. The single most common principle is “transparency”—a particularly multifaceted concept, it seems.Footnote 29

Meanwhile, the ethics researcher Thilo Hagendorff considers that the weak point of ethics guidelines is that AI ethics—like ethics in general—lacks mechanisms for creating compliance or for implementing its own normative claims.Footnote 30 According to Hagendorff, this is also the reason why ethics is so appealing to many companies and institutions. When companies and research institutes formulate their own ethics guidelines, repeatedly introduce ethical considerations, or adopt ethically motivated commitments of their own, Hagendorff argues, this counteracts the introduction of genuinely binding legal frameworks. He thus places great emphasis on the avoidance of regulation as a main aim of the AI industry’s ethics guidelines. Mark Coeckelbergh, professor of media and technology philosophy, who is also a member of the High-Level Expert Group, expresses similar risks: “that ethics are used as a fig leaf that helps to ensure acceptability of the technology and economic gain but has no significant consequences for the development and use of the technologies.”Footnote 31 Even if this reminder has merits, and the risk is real—it is indubitably an incentive for many companies to avoid tougher regulation by pointing to “self-regulation” and the development of internal policies with weak actual implementation—there may yet be other reasons why ethics has been emphasized so heavily as a tool for governance within AI development. Even though self-regulation is surely used as an argument for avoiding the intervention of concrete legislation, the question remains whether the rapid growth of the AI field does not play just as important a role: this particular field may simply have required a softer approach while waiting for critical research to catch up and offer a stable foundation for potent regulation. The question is, however, what codification of AI ethics would involve, and which parts of it would be best suited for legislation.

3.1 Ethics Guidelines for Trustworthy AI

In April 2018, the EU adopted a strategy for AI and appointed the High-Level Expert Group, with its 52 members, to provide advice on both investments and ethical-governance issues in relation to AI in Europe. In December 2018, the Commission presented a co-ordinated plan—“Made in Europe”—which had been developed with the Member States to promote the development and use of AI in Europe. For example, the Commission expressed an intention that all Member States should have their own strategies in place by the middle of 2019, which did not completely materialize. The expert group was appointed via an open call and consists of a fairly mixed group of researchers and university representatives (within areas such as robotics, computer science, and philosophy), as well as representatives of industry (such as Zalando, Bosch, and Google) and civil-society organizations (such as Access Now,Footnote 32 ANEC,Footnote 33 and BEUCFootnote 34). The composition has not escaped criticism, however. For example, in May 2019, Yochai Benkler, a professor at Harvard Law School—perhaps most famous for his optimistic writings on collaborative economies, focusing on phenomena such as Wikipedia, Creative Commons, and open-source code—expressed a fear that representatives of industry were allowed too much control over regulatory issues governing AI.Footnote 35 Benkler drew parallels between the EU Commission’s expert group, Google’s failed committee for AI ethics issues, and Facebook’s investment in a German AI and ethics research centre.
Similarly, the technology and law researcher Michael Veale criticizes the High-Level Expert Group—focusing on the set of policy and investment recommendationsFootnote 36 that was published after the Ethics Guidelines—for failing to address questions of power and infrastructure, as well as organizational factors (including business models), in contemporary data-driven markets.Footnote 37 When the Ethics Guidelines were published, they were also criticized by members of the expert group itself. Thomas Metzinger, a philosopher at the Johannes Gutenberg University Mainz, critically described the process as “ethics washing” in an opinion piece in which he described how the drafts produced on prohibitions against certain areas of use, such as autonomous weapons systems, had been toned down by representatives of industry and their allies, to land in softer and more permissive wordings.Footnote 38

The Ethics Guidelines have had a clear impact on the subsequent White Paper on AI from the EU Commission (see below), but it still remains to be seen what sort of importance and impact all of these sources will have on European AI development. The Ethics Guidelines point out that trustworthy AI has three components that should be in place throughout the entire life-cycle of AI:

  a. it should be legal and comply with all applicable laws and regulations;

  b. it should be ethical and safeguard compliance with ethical principles and values; and

  c. it should be robust, from both a technical and a societal viewpoint, as AI systems can cause unintentional harm, despite good intentions.

The guidelines focus on ethical issues (b) and robustness (c), but leave legal issues (a) outside the explicit guidelines. They do so despite the fact that issues that are fairly well anchored in law—such as responsibility, anti-discrimination, and, not least, data protection—still fall within the framework of the ethics. As the expert group itself establishes, many parts of AI development and use in Europe are already covered by existing legislation: the Charter of Fundamental Rights, the General Data Protection Regulation (GDPR), the Product Liability Directive, directives against discrimination, consumer-protection legislation, and so on. Even though ethical and robust AI is often already reflected to some extent in existing laws, its full implementation may reach beyond existing legal obligations.

The expert group provides four ethical principles constituting the “foundation” of trustworthy AI: (1) Respect for human autonomy; (2) Prevention of harm; (3) Fairness; and (4) Explicability. However, for the realization of trustworthy AI, they address seven main prerequisites, which, they argue, must be evaluated and managed continuously during the entire life-cycle of the AI system:

  1. Human agency and oversight

  2. Technical robustness and safety

  3. Privacy and data governance

  4. Transparency

  5. Diversity, nondiscrimination, and fairness

  6. Societal and environmental wellbeing

  7. Accountability.

As mentioned, although the guidelines emphasize that they focus on ethics and robustness, and not on issues of legality, it is interesting to note that both anti-discrimination (5) and protection of privacy (3) are developed as two of the seven central ethical prerequisites for the implementation of trustworthy AI. In the investment and policy recommendations also published by the expert group, it recommends features such as a risk-based approach that is both proportionate and effective in guaranteeing that AI is legal, ethical, and robust, and adheres to fundamental rights.Footnote 39 Interestingly, the expert group calls for a comprehensive mapping of relevant EU regulations to be carried out, in order to assess the extent to which the various regulations still fulfil their purposes in an AI-driven world. They highlight that new legal measures and control mechanisms may be needed to safeguard adequate protection against negative effects, and to enable correct supervision and implementation.

The Ethics Guidelines argue for the need for processes to be transparent in the sense that the capacities and purpose of AI systems should be “openly communicated, and decisions—to the extent possible—explainable to those directly and indirectly affected.”Footnote 40 A key reason is to build and maintain users’ trust. In the literature relating to ethics guidelines targeted at AI, it has been argued that transparency is not an ethics principle in itself, but rather a “pro-ethical condition”Footnote 41 for enabling or impairing other ethical practices or principles. As argued in a study on the socio-legal relevance of AI, there are several contradictory interests that can be linked to the issue of transparency.Footnote 42 Consequently, there are reasons other than pure technical complexity why certain approaches may be of a “black box” nature, not least the corporate interests in keeping commercial secrets and holding intellectual property rights.Footnote 43 Furthermore, the Ethics Guidelines contain an assessment list for practical use by companies. During the second half of 2019, over 350 organizations tested this assessment list and submitted feedback. The High-Level Expert Group revised its guidelines in light of this feedback and presented its final Assessment List for Trustworthy Artificial Intelligence in July 2020.

3.2 The White Paper on AI

As mentioned, Commission President Ursula von der Leyen announced in her political GuidelinesFootnote 44 a co-ordinated European approach to the human and ethical implications of AI, as well as a reflection on the better use of big data for innovation. The White Paper on AI from February 2020 can be seen in light of this commitment. In the White Paper, the Commission expresses support for a regulatory and investment-oriented approach with what it calls a “twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology,” and states that the purpose of the White Paper is to set out policy options on how to achieve these objectives.Footnote 45 A key proposal in the White Paper is a risk-based, sector-specific approach to regulating AI, in which high-risk applications are distinguished from all other applications. First, the application should belong to a high-risk sector, where “significant risks can be expected,” which may initially include “healthcare; transport; energy and parts of the public sector.”Footnote 46 In addition, the application should be used in such a manner that “significant risks are likely to arise”—a cumulative test in which both conditions must be met. This proposal is an either/or approach to risk, and more nuanced alternatives have been proposed elsewhere, for example by the German Data Ethics Commission.Footnote 47
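The cumulative logic of the White Paper's proposal can be sketched schematically. The sector list and the boolean use-criterion below are illustrative placeholders of mine, not the Commission's official enumeration:

```python
# Schematic sketch of the White Paper's cumulative, two-criteria test:
# an application counts as "high-risk" only if BOTH the sector and the
# manner of use entail significant risk.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

def is_high_risk(sector: str, use_entails_significant_risk: bool) -> bool:
    """Cumulative approach: both conditions must hold."""
    return sector in HIGH_RISK_SECTORS and use_entails_significant_risk

# A risky use outside a listed sector falls outside the scope...
print(is_high_risk("retail", True))       # False
# ...while a risky use inside a listed sector falls inside it.
print(is_high_risk("healthcare", True))   # True
print(is_high_risk("healthcare", False))  # False
```

The binary in/out structure is visible in the sketch: there is no middle ground between "high-risk" and "everything else," which is precisely what more graduated, multi-level alternatives such as the German Data Ethics Commission's aim to remedy.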

There is a clear value-base in the White Paper, with a particular focus on the concept of trust: “Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection.”Footnote 48 An expressed aim of the EU’s policy framework is to mobilize resources to achieve an “ecosystem of excellence” along “the entire value chain.” The key element of a future regulatory framework for AI in Europe is the creation of a “unique ecosystem of trust,” which is described as a policy objective in itself. The Commission’s hope is that a clear European regulatory framework would “build trust among consumers and businesses in AI, and therefore speed up the uptake of the technology.”Footnote 49

The White Paper clearly aligns with the human-centric approach based on the Communication on Building Trust in Human-Centric AI, which is also a central part of the Ethics Guidelines discussed above. The White Paper states that the Commission will take into account the input obtained during the piloting phase of the Ethics Guidelines prepared by the High-Level Expert Group on AI. Interestingly enough, the Commission concludes that the requirements regarding transparency, traceability, and human oversight are not specifically covered under current legislation in many economic sectors. A lack of transparency, the Commission argues, makes it “difficult to identify and prove possible breaches of laws, including legal provisions that protect fundamental rights, attribute liability and meet the conditions to claim compensation.”Footnote 50

3.3 Asian Comparison

From an Asian socio-legal perspective, the Chinese and Japanese developments in AI policy and governance are significant, but will only briefly be addressed here. The core of China’s AI strategy can be found in the New Generation Artificial Intelligence Development Plan (AIDP), issued by China’s State Council in July 2017, and Made in China 2025, released in May 2015.Footnote 51 For example, a goal expressed in the AIDP is to establish initial ethical norms, policies, and regulations related to AI development in China by 2020, to be further codified by 2025.Footnote 52 This includes participation in international standard-setting, as well as deepening international co-operation on AI laws and regulations. In 2019, a National Governance Committee for the New Generation Artificial Intelligence was established, which published a set of governance principles.Footnote 53 In May 2019, another set, the so-called Beijing AI Principles, was released by the Beijing Academy of Artificial Intelligence, depicting the realization of beneficial AI for humankind and nature as the core of AI development. These Principles have been supported by various elite Chinese universities and companies, including Baidu, Alibaba, and Tencent.Footnote 54

In Japan, an expert group at the Japanese Cabinet Office has elaborated on the Social Principles of Human-Centric AI (Social Principles), which was published in March 2019 after public comments were solicited. In a comparison between Japanese and European initiatives, a recent study concludes that common elements of both notions of governance include that AI should be applied in a manner that is “human-centric” and should be committed to the fundamental (constitutional) rights of individuals and democracy.Footnote 55 A particular difference is, however, according to Kozuka, that Japan’s Social Principles are more policy-oriented, while the European Ethics Guidelines have a rights-based approach. Interestingly, Kozuka—with references to Lawrence Lessig in the paper—concludes that “the role of the law as a mechanism of implementation will shrink and be substituted by the code as the use of AI becomes widespread.”Footnote 56 This notion is particularly meaningful in relation to automated policy implementation on large-scale digital platforms, shaping both human and institutional behaviour.Footnote 57


This article has put forward the difficulty of defining AI as one of the regulatory challenges that follow from the implementation and development of AI. While the historically visionary and contemporarily heterogeneous AI field arguably provides favourable conditions for research and development, the conceptual fuzziness creates a challenge for regulation and governance. As much recent critical research shows, it is perhaps the data dependency of today’s machine learning, in combination with a complexity that undermines explainability, that creates the risk of societal imbalances being not only reproduced but also reinforced, while at the same time evading detection. Furthermore, the article provides an account of the recent boom in ethics guidelines as a tool for governance in the field of AI, with particular focus on the EU. Finally, three main concluding statements from the perspective of law and society can be made.

4.1 The Temporality Issue of Technology and Law

History teaches us that regulatory balancing is difficult, especially in times of rapid technological change in society. At the same time, legal scholars such as Karl Renner, who analyzed property laws during Western Europe’s industrialization, also teach that law can be an extremely dynamic and adaptive organism. It is conceivable that central parts of the Ethics Guidelines may be formalized in European and national legislation and regulation, focusing on the importance of (“an ecosystem of”) trust. The interpretation of existing legislation in light of the functionalities, possibilities, and challenges of AI systems is also a matter of serious concern, associated with major challenges. Even though the “legal lag” is more complex than it may seem,58 the speed of change, in particular, remains a difficult challenge in relation to the inertia of traditional regulation.59 Legislative processes aimed at learning technologies with increasing agency60 require reflection, critical studies, and more knowledge in order to find the desirable societal balances between various interests. Transparency, traceability, and human oversight, in particular, are not clearly covered—or understood—under current legislation in many economic sectors. The temporal gap between new technologies and well-fitted regulation, in combination with the many-headed balancing of interests, is very likely a significant reason why governance in this area is currently so heavily characterized by ethics guidelines.

4.2 From Principles to Process

The White Paper signifies an ongoing process of evaluation towards a point where the principled take on AI governance expressed by a multitude of ethics guidelines can find a balanced formalization in law. This is also signified by the work conducted by the High-Level Expert Group, which revisited and assessed the Ethics Guidelines jointly with companies and stakeholders during 2019 and 2020. The Member States’ supervisory authorities and agencies could be addressed specifically here too, in the sense that they will very likely be the ones to carry out relevant parts of any regulatory approach on AI focusing on what the High-Level Expert Group has expressed as a need for “explicability”—that is, auditability and explainability. This particular aspect of transparency stresses the need both for methodological development61 and, likely, for closer collaboration between relevant supervisory authorities than is often the case at the Member-State level.

The great range of ethics guidelines still displays a core of shared principled values but—being ethics guidelines—they are relatively poor in procedural arrangements compared to law. This can be understood as an expression of how rapid the transition in society towards data-dependent and applied AI has been, a transition in which the principled stage is essential. The subsequent procedural stage is necessary, however, both to strengthen the chances of implementing those values and to formalize them in legislation, assessment methodologies, and standardization. If one regards the growth in ethics guidelines as an expression of the rapidity of the development of AI methods, the procedural stage is an expected second stage. If, however, one regards the ethics guidelines as an expression of industry’s reluctance to accept regulation of its activities—a soft version of legislation intended to be toothless—then the procedural stage will meet considerable resistance. Perhaps the guidelines’ silence on the power structures of contemporary data-driven platform markets—emphasized by critics—is a sign of the regulatory struggles to come in the leap from principles to process.

4.3 The Multidisciplinary AI Research Need

Contemporary data-dependent AI should not be developed in technological isolation, without continuous assessments from the perspective of ethics, cultures, and law.62 Furthermore, given the applied status of AI, it is imperative that humanistic and social-scientific AI research is stimulated jointly with technological research and development. Given the learning aspects of data-dependent AI, there is an interaction at hand in which human values and societal structures constitute the training data. This means that social values and informal norms may be reproduced or even amplified—sometimes with terrible outcomes. From an empirical perspective, one could conclude that it is often human values and skewed social structures that lead to automated failures. In applied AI, learning simply arises not only from good and balanced examples, but also from the less proud sides of humanity: racism, xenophobia, gender inequalities, and institutionalized injustices.63 The challenge here will thus be to sort normatively among the underlying data or, alternatively, to take a normative view on the automation and scalability of self-learning technologies so that their reproductive and amplifying tendencies become better and more balanced than their underlying material. There is, therefore, a need for multidisciplinary research in this field, requiring collaboration between the mathematically informed computer-scientific disciplines, which have deep insights into how AI systems are built and operate, and the humanities and social-science-oriented disciplines, which can theorize and understand the interaction of these systems with cultures, norms, values, and attitudes, as well as their meanings and consequences for power relations, states, and regulation.

In conclusion, AI development has come to be framed in value-based and ethics-focused terms within the European administration, with a focus on trustworthiness and human-centric design. It is an answer to the question of how to look at AI and its qualities, an answer that is here found to be commendable: the precision of self-learning and autonomous technologies needs to be assessed in its interaction with the values of society. It is a normative definition with bearing on future lines of development—a good AI is a socially entrenched and trustworthy one.


The research for this paper was enabled by the Wallenberg AI, Autonomous Systems and Software Program—Humanities and Society (WASP-HS), within the AI Transparency and Consumer Trust project; the Swedish Research Council (VR; grant no. 2019–00198) in the AIR Lund (Artificially Intelligent use of Registers at Lund University) research environment; and Vinnova (2019–00848) in the Framework for Sustainable AI project.


Senior lecturer and Associate Professor in Technology and Social Change at Lund University, Sweden, Department of Technology and Society. Correspondence to Stefan Larsson, Technology and Society, LTH, Lund University, Box 118, 221 00 Lund, Sweden. E-mail address:

1. See Renner (2010 [1949]).

2. See Larsson (2014).

3. See van Dijck, Poell, & de Waal (2018); Poell, Nieborg, & van Dijck (2019).

4. Mejias & Couldry (2019).

5. Larsson (2018).

6. Ehrlich (1913).

7. Pound (1910).

8. For an insightful analysis of Ehrlich’s and Renner’s theoretical contributions to the sociology of law, see Nelken (1984). For discussion on the reuse and reinterpretation of socio-legal classic theory, see Nelken (2007) and, for a particular digital context, see Larsson (2013).

9. For an extensive account of this trichotomy, see Larsson (2017).

10. EU Commission (2018).

11. The High-Level Expert Group (2019b).

12. See Jobin, Ienca, & Vayena (2019); Hagendorff (2020); Mittelstadt (2019).

13. von der Leyen (2019), p. 13.

14. EU Commission (2020).

15. Mandel (2009), p. 75 (analyzing advancements in biotechnology, nanotechnology, and synthetic biology from a regulatory perspective).

16. Larsson (2019).

17. Fast & Horvitz (2017).

18. Monett, Lewis, & Thórisson (2020).

19. Martinez (2019).

20. Gasser & Almeida (2017), p. 59.

21. The High-Level Expert Group (2019a).

22. See Stone et al. (2016).

24. Dignum (2019), p. 3.

25. Also emphasized in Larsson & Heintz (2020).

26. The High-Level Expert Group, supra note 21, p. 6.

27. For definitional and metaphoric aspects of new technologies and their regulatory implications, see Larsson (2017).

28. Jobin, Ienca, & Vayena, supra note 12.

29. For a conceptual analysis of transparency in AI, see Larsson & Heintz, supra note 25, and a socio-legal commentary in Larsson, supra note 16, proposing seven aspects of socio-legal relevance for transparency in applied AI.

30. Hagendorff, supra note 12.

31. Coeckelbergh (2019), p. 33.

32. An international nonprofit, human-rights, public-policy, and advocacy group dedicated to an open and free Internet.

33. The European Association for the Co-ordination of Consumer Representation in Standardisation.

34. The European Consumer Organisation, bringing together 45 European consumer organizations from 32 countries.

35. Benkler (2019).

36. The High-Level Expert Group (2019c).

37. Veale (2020); see also Koulu (2020) on the shortcomings of human control over automation and the need to broaden the discussion from the current focus on technology and ethics to discussions about societal structures and law.

38. Metzinger (2019).

39. The High-Level Expert Group, supra note 36. See also the Opinion of the German Data Ethics Commission (2019).

40. The High-Level Expert Group, supra note 11, p. 13.

41. Turilli & Floridi (2009).

42. Larsson, supra note 16; Larsson & Heintz, supra note 25.

43. See Pasquale (2015).

44. von der Leyen, supra note 13.

45. EU Commission, supra note 14, p. 1.

46. Ibid., p. 17.

47. German Data Ethics Commission, supra note 39.

48. EU Commission, supra note 14, p. 2.

49. Ibid., pp. 9–10.

50. Ibid., p. 14.

51. For a comprehensive analysis of the implications and direction of the AIDP, see Roberts et al. (2019).

52. On ongoing efforts to develop AI-governance theories and technologies from the perspective of China, see Wu, Huang, & Gong (2020).

53. National Governance Committee for the New Generation Artificial Intelligence (2019) Governance Principles of the New Generation Artificial Intelligence—Developing Responsible AI.

54. Daly et al. (2019).

55. Kozuka (2019), p. 322.

56. Ibid., p. 329.

57. See Katzenbach & Ulbricht (2019) on “algorithmic governance;” Larsson, supra note 8; van Dijck, Poell, & de Waal, supra note 3.

58. See Bennett Moses (2011).

59. Namely Abel (1982).

60. Hildebrandt (2015).

61. See Larsson, supra note 5, on the “algorithmic governance” of data-driven markets.

62. See Moses, supra note 58; Koulu, supra note 37; Larsson, supra note 16; Veale, supra note 37; Yeung & Lodge (2019).

63. Discussed by Larsson, supra note 16, in terms of a “mirror for social structures.”


References

Abel, Richard L. (1982) “Law as Lag: Inertia as a Social Theory of Law.” 80 Michigan Law Review 785–809.
AIDP (2017) New Generation Artificial Intelligence Development Plan (AIDP), Beijing: China’s State Council.
Beijing Academy of Artificial Intelligence (2019) “Beijing AI Principles” (accessed 17 May 2020).
Benkler, Yochai (2019) “Don’t Let Industry Write the Rules for AI.” 569 Nature 161.
Bennett Moses, Lyria (2011) “Agents of Change.” 20 Griffith Law Review 763–94.
Coeckelbergh, Mark (2019) “Artificial Intelligence: Some Ethical Issues and Regulatory Challenges.” Technology and Regulation 31–4.
Daly, Angela, et al. (2019) “Artificial Intelligence, Governance and Ethics: Global Perspectives,” The Chinese University of Hong Kong Faculty of Law Research Paper, No. 2019–15, Hong Kong: Chinese University of Hong Kong.
Dignum, Virginia (2019) Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way, Cham: Springer International Publishing.
Ehrlich, Eugen (1913) Grundlegung der Soziologie des Rechts, Berlin: Verlag von Duncker & Humblot.
EU Commission (2018) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Artificial Intelligence for Europe, COM(2018) 237 final, Brussels: European Commission.
EU Commission (2020) White Paper on Artificial Intelligence—A European Approach to Excellence and Trust, COM(2020) 65 final, Brussels: European Commission.
Fast, Ethan, & Horvitz, Eric (2017) “Long-Term Trends in the Public Perception of Artificial Intelligence.” Presented at the Thirty-First AAAI Conference on Artificial Intelligence, 4–10 February 2017.
Gasser, Urs, & Almeida, Virgilio A. F. (2017) “A Layered Model for AI Governance.” 21 IEEE Internet Computing 58–62.
German Data Ethics Commission (2019) “Opinion of the Data Ethics Commission” (accessed 15 May 2020).
Hagendorff, Thilo (2020) “The Ethics of AI Ethics: An Evaluation of Guidelines.” 30 Minds and Machines 99–120.
High-Level Expert Group on Artificial Intelligence, The (2019a) A Definition of AI: Main Capabilities and Disciplines: Definition Developed for the Purpose of the AI HLEG’s Deliverables, Brussels: European Commission.
High-Level Expert Group on Artificial Intelligence, The (2019b) Ethics Guidelines for Trustworthy AI, Brussels: European Commission.
High-Level Expert Group on Artificial Intelligence, The (2019c) Policy and Investment Recommendations for Trustworthy Artificial Intelligence, Brussels: European Commission.
Hildebrandt, Mireille (2015) Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology, Cheltenham: Edward Elgar Publishing.
Jobin, Anna, Ienca, Marcello, & Vayena, Effy (2019) “The Global Landscape of AI Ethics Guidelines.” 1 Nature Machine Intelligence 389–99.
Katzenbach, Christian, & Ulbricht, Lena (2019) “Algorithmic Governance.” 8 Internet Policy Review (accessed 12 June 2020).
Koulu, Riikka (2020) “Human Control over Automation: EU Policy and AI Ethics.” 12 European Journal of Legal Studies 9–46.
Kozuka, Souichirou (2019) “A Governance Framework for the Development and Use of Artificial Intelligence: Lessons from the Comparison of Japanese and European Initiatives.” 24 Uniform Law Review 315–29.
Larsson, Stefan (2013) “Sociology of Law in a Digital Society—a Tweet from Global Bukowina.” 15 Societas/Communitas 281–95.
Larsson, Stefan (2014) “Karl Renner and (Intellectual) Property—How Cognitive Theory Can Enrich a Sociolegal Analysis of Contemporary Copyright.” 48 Law & Society Review 3–33.
Larsson, Stefan (2017) Conceptions in the Code: How Metaphors Explain Legal Challenges in Digital Times, New York: Oxford University Press.
Larsson, Stefan (2018) “Algorithmic Governance and the Need for Consumer Empowerment in Data-Driven Markets.” 7 Internet Policy Review 1–13.
Larsson, Stefan (2019) “The Socio-Legal Relevance of Artificial Intelligence.” 103 Droit et Société 573–93.
Larsson, Stefan, & Heintz, Fredrik (2020) “Transparency in Artificial Intelligence.” 9 Internet Policy Review 1–16.
Legg, Shane, & Hutter, Marcus (2007) “A Collection of Definitions of Intelligence,” in Goertzel, B. & Wang, P., eds., Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms, Proceedings of the AGI Workshop 2006 (Vol. 157), IOS Press, 17–24.
Mandel, Gregory N. (2009) “Regulating Emerging Technologies.” 1 Law, Innovation and Technology 75–92.
Martinez, Rex (2019) “Artificial Intelligence: Distinguishing Between Types & Definitions.” 19 Nevada Law Journal 1015–42.
Mejias, Ulises A., & Couldry, Nick (2019) “Datafication.” 8 Internet Policy Review (accessed 15 September 2020).
Metzinger, Thomas (2019) “EU Guidelines. Ethics Washing Made in Europe” (accessed 15 September 2020).
Mittelstadt, Brent (2019) “Principles Alone Cannot Guarantee Ethical AI.” 1 Nature Machine Intelligence 501–7.
Monett, Dagmar, Lewis, Colin, & Thórisson, Kristinn R. (2020) “Introduction to the JAGI Special Issue ‘On Defining Artificial Intelligence’—Commentaries and Author’s Response.” 11 Journal of Artificial General Intelligence 1–100.
National Governance Committee for the New Generation Artificial Intelligence (2019) “Governance Principles of the New Generation Artificial Intelligence—Developing Responsible Artificial Intelligence” (accessed 12 June 2020).
Nelken, David (1984) “Law in Action or Living Law: Back to the Beginning in Sociology of Law.” 4 Legal Studies 157.
Nelken, David (2007) “An Email from Global Bukowina.” 3 International Journal of Law in Context 189–202.
Pasquale, Frank (2015) The Black Box Society: The Secret Algorithms that Control Money and Information, Cambridge, MA: Harvard University Press.
Poell, Thomas, Nieborg, David, & van Dijck, José (2019) “Platformisation.” 8 Internet Policy Review (accessed 12 June 2020).
Pound, Roscoe (1910) “Law in Books and Law in Action.” 44 American Law Review 12–36.
Renner, Karl (2010 [1949]) The Institutions of Private Law and Their Social Functions, New Brunswick & London: Transaction Publishers.
Roberts, Huw, et al. (2019) “The Chinese Approach to Artificial Intelligence: An Analysis of Policy and Regulation” (accessed 15 September 2020).
Stone, Peter, et al. (2016) “Artificial Intelligence and Life in 2030,” Report of the 2015–2016 Study Panel, Stanford University.
Turilli, Matteo, & Floridi, Luciano (2009) “The Ethics of Information Transparency.” 11 Ethics and Information Technology 105–12.
van Dijck, José, Poell, Thomas, & de Waal, Martijn (2018) The Platform Society: Public Values in a Connective World, Oxford: Oxford University Press.
Veale, Michael (2020) “A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence.” European Journal of Risk Regulation 1–8.
von der Leyen, Ursula (2019) A Union that Strives for More. My Agenda for Europe, Political Guidelines for the Next European Commission 2019–2024, Brussels: European Commission.
Wu, Wenjun, Huang, Tiejun, & Gong, Ke (2020) “Ethical Principles and Governance Technology Development of AI in China.” 6 Engineering 302–9.
Yeung, Karen, & Lodge, Martin, eds. (2019) Algorithmic Regulation, Oxford: Oxford University Press.