
9 - AI, Governance and Ethics

Global Perspectives

from Part II - Regulation and Policy

Published online by Cambridge University Press:  01 November 2021

Hans-W. Micklitz, European University Institute, Florence
Oreste Pollicino, Bocconi University
Amnon Reichman, University of California, Berkeley
Andrea Simoncini, University of Florence
Giovanni Sartor, European University Institute, Florence
Giovanni De Gregorio, University of Oxford

Summary

This chapter presents an overview of how governments, corporations and other actors are approaching the topic of Artificial Intelligence (AI) governance and ethics across China, Europe, India and the United States of America. Recent policy documents and other initiatives from these regions, from both public sector agencies and private companies such as Microsoft, are documented and a brief analysis is offered.

Publisher: Cambridge University Press
Print publication year: 2021
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (CC BY-NC-ND 4.0) https://creativecommons.org/cclicenses/

9.1 Introduction

Artificial intelligence (AI) is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed even AI itself deciding to take dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere, and will in future adhere, to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, in which we give an insight into what is happening in Australia, China, the European Union, India and the United States.

We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location.

Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, though the United States has been catching up in the last eighteen months. India remains an outlier among these ‘large jurisdictions’ by not articulating a set of AI ethics principles, and Australia hints at the challenges a smaller player may face in forging its own path. The focus of these initiatives is beginning to turn to producing legally enforceable outcomes, rather than purely high-level, usually voluntary, principles. However, legal enforceability also requires practical operationalisation of norms for AI research and development, and may not always produce desirable outcomes.

9.2 AI, Regulation and Ethics

AI has been deployed in a range of contexts and social domains, with mixed outcomes, including in finance, education, employment, marketing and policing.Footnote 1 At this relatively early stage in AI’s development and implementation, the question has arisen of whether AI adheres to certain ethical principles.Footnote 2 The capacity of existing laws to govern AI has emerged as another key question bearing on how future AI will be developed, deployed and implemented.Footnote 3 While originally confined to theoretical, technical and academic debates, the issue of governing AI has recently entered the mainstream, with both governments and private companies from major geopolitical powers including the United States, China and the European Union formulating statements and policies regarding AI and ethics.Footnote 4

A host of questions are raised by these developments. For one, what are the ethical standards to which AI should adhere? The transnational nature of digitised technologies, the key role of private corporations in AI development and implementation and the globalised economy give rise to questions about which jurisdictions and actors will decide on these standards. Will we end up with a ‘might is right’ approach where it is these large geopolitical players which set the agenda for AI regulation and ethics for the whole world? Further questions arise regarding the enforceability of ethics statements regarding AI, both in terms of whether they reflect existing fundamental legal principles and are legally enforceable in specific jurisdictions, and the extent to which the principles can be operationalised and integrated into AI systems and applications in practice.

Ethics itself can be seen as a theory that reflects on morality, or as the theory of the good life. A distinction can be made between fundamental ethics, which is concerned with abstract moral principles, and applied ethics.Footnote 5 The latter includes the ethics of technology, which in turn contains AI ethics as a subcategory. Roughly speaking, AI ethics serves the self-reflection of the computer and engineering sciences engaged in the research and development of AI and machine learning. In this context, dynamics such as individual technology development projects, or the development of new technologies as a whole, can be analysed. Likewise, the causal mechanisms and functions of particular technologies can be investigated using a more static analysis.Footnote 6 Typical topics are self-driving cars, political manipulation by AI applications, autonomous weapon systems, facial recognition, algorithmic discrimination, conversational bots, social sorting by ranking algorithms and many more.Footnote 7 Key demands of AI ethics relate to aspects such as research goals and purposes, research funding, the linkage between science and politics, the security of AI systems, responsibility for the development and use of AI technologies, the inscription of values in technical artefacts, the orientation of the technology sector towards the common good and much more.Footnote 8

In this chapter, we give an overview of major countries’ and regions’ approaches to AI, governance and ethics. We do not claim to present an exhaustive account of approaches to this issue internationally, but we do aim to give a snapshot of how some countries and regions, especially large ones like China, the European Union, India and the United States, are (or are not) addressing the topic. We also include some initiatives at the national level, from the EU Member State Germany and from Australia, both of which can be considered smaller (geo)political and legal entities. In examining these initiatives, we look at one particular aspect, namely the extent to which these ethics/governance initiatives from governments are legally enforceable. This is an important question given concerns about ‘ethics washing’: that ethics and governance initiatives without the binding force of law are mere ‘window dressing’ while unethical uses of AI by governments and corporations continue.Footnote 9

These activities, especially those of the ‘large jurisdictions’, are important given the lack of international law explicitly dealing with AI. There has been some activity from international organisations, such as the OECD’s Principles on AI, which form the basis for the G20’s non-binding guiding principles for using AI.Footnote 10 The United Nations (UN) and its constituent bodies are also undertaking various activities that relate to AI.Footnote 11 The most significant are occurring at UNESCO, which has commenced a two-year process ‘to elaborate the first global standard-setting instrument on ethics of artificial intelligence’, which it aims to produce by late 2021.Footnote 12 However, the prospects of success for such initiatives, especially legally enforceable ones, may be dampened by the fact that an attempt in 2018 to open formal negotiations to reform the UN Convention on Certain Conventional Weapons to govern or prohibit fully autonomous lethal weapons was blocked by the United States and Russia, among others.Footnote 13 In June 2020, various states – including Australia, the European Union, India, the United Kingdom and the United States, but excluding China and Russia – formed the Global Partnership on Artificial Intelligence (GPAI), an ‘international and multistakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth’.Footnote 14 GPAI’s activities, and their convergence or divergence with those in multilateral fora such as UN agencies, remain to be seen.

In the following sections, we give overviews of the situation in each country/region and the extent to which legally binding measures have been adopted. We have specifically considered government initiatives which frame and situate themselves in the realm of ‘AI governance’ or ‘AI ethics’. We acknowledge that other initiatives, from corporations, NGOs and other organisations on AI ethics and governance, and other initiatives from different stakeholders on topics relevant to ‘big data’ and the ‘Internet of Things’, may also be relevant to AI governance and ethics. Further work should be conducted on these and on ‘connecting the dots’ between some predecessor digital technology governance initiatives and the current drive for AI ethics and governance.

9.3 Country/Region Profiles

Australia

While Australia occupies a unique position as the only western liberal democracy without comprehensive enforceable human rights protections,Footnote 15 there has been increasing attention on the human rights impacts of technology and the development of an AI ethics framework.

The Australian AI Ethical Framework was initially proposed by Data61 and CSIRO in the Australian Commonwealth (i.e., federal) Department of Industry, Innovation and Science in 2019.Footnote 16 A discussion paper from this initiative commenced with an examination of existing ethical frameworks, principles and guidelines and included a selection of largely international or US-based case studies, which overshadowed the unique Australian socio-political-historical context. It set out eight core principles to form an ethical framework for AI. The proposed framework was accompanied by a ‘toolkit’ of strategies to operationalise the high-level ethical principles in practice, including impact and risk assessments, best practice guidelines and industry standards. Following a public consultation process, which involved refinement of the eight proposed principles (for example, merging two and adding a new one), the Australian AI Ethics Principles were finalised as: human, social and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.Footnote 17 The Principles are entirely voluntary and have no legally binding effect. The Australian government has released some guidance on the Principles’ application, but this is scant compared to other efforts in, for example, Germany (as discussed later).Footnote 18

One further significant development is the Human Rights and Technology project that is being led by the Australian Human Rights Commissioner Edward Santow, explicitly aimed at advancing a human rights–based approach to regulating AI.Footnote 19 The Australian Human Rights Commission (AHRC) has made a series of proposals, including: the development of an Australian National Strategy on new and emerging technologies; that the Australian government introduce laws that require an individual to be informed where AI is used and to ensure the explainability of AI-informed decision-making; and that where an AI-informed decision-making system does not produce reasonable explanations, it should not be deployed where decisions can infringe human rights. The AHRC has also called for a legal moratorium on the use of facial recognition technology until an appropriate legal framework has been implemented. There is the potential for these proposals to become legally binding, subject to normal parliamentary processes and the passage of new or amended legislation.

China

China has been very active in generating state-supported or state-led AI governance and ethics initiatives alongside its world-leading AI industry. Until the 2019 Trump Executive Order stimulated AI governance and ethics strategy development in the United States, China combined this very strong AI industry with governance strategising, in contrast to its main competitor.

In 2017, China’s State Council issued the New-Generation AI Development Plan (AIDP), which advanced China’s objective of high investment in the AI sector in the coming years, with the aim of becoming the world leader in AI innovation.Footnote 20 An interim goal, by 2025, is to formulate new laws and regulations, and ethical norms and policies related to AI development in China. This includes participation in international standard setting, or even ‘taking the lead’ in such activities as well as ‘deepen[ing] international cooperation in AI laws and regulations’.Footnote 21 The plan introduced China’s attitude towards AI ethical, legal and social issues (ELSI), and prescribed that AI regulations should facilitate the ‘healthy development of AI’.Footnote 22 The plan also mentioned AI legal issues including civil and criminal liability, privacy and cybersecurity. Its various ethical proposals include a joint investigation into AI behavioural science and ethics, an ethical multi-level adjudicative structure and an ethical framework for human-computer collaboration.

To support the implementation of the ‘Three-Year Action Plan to Promote the Development of a New Generation of Artificial Intelligence Industry (2018–2020)’, the 2018 AI Standardization Forum released its first White Paper on AI Standardization.Footnote 23 It signalled that China would set up the National AI Standardization Group and the Expert Advisory Panel. Public agencies, enterprises and academics appear to be closely linked to the group, and tech giants like Tencent, JD, Meituan, iQiyi, Huawei and Siemens China are included in the Advisory Panel on AI ethics. The 2019 report on AI risks then took the implications of algorithms into serious consideration, building upon declarations and principles concerning algorithmic regulation proposed by international, national and technical communities and organisations.Footnote 24 The report also proposes two ethical guidelines for AI. The first is the principle of human interest, which means that AI should have the ultimate goal of securing human welfare; the second is the principle of liability, which implies that there should be an explicit regime for accountability in both the development and deployment of AI-related technologies.Footnote 25 In a broader sense, liability ought to be considered an overarching principle that can guarantee transparency as well as consistency of rights and responsibilities.Footnote 26

There have been further initiatives on AI ethics and governance. In May 2019, the Beijing AI Principles were released by the Beijing Academy of Artificial Intelligence, which depicted the core of its AI development as ‘the realization of beneficial AI for humankind and nature’.Footnote 27 The Principles have been supported by various elite Chinese universities and companies including Baidu, Alibaba and Tencent. Another group comprising top Chinese universities and companies and led by the Ministry of Industry and Information Technology’s (MIIT’s) China Academy of Information and Communications Technology, the Artificial Intelligence Industry Alliance (AIIA), released its Joint Pledge on Self Discipline in the Artificial Intelligence Industry, also in May 2019. While the wording is fairly generic when compared to other ethics and governance statements, Webster points to the language of ‘secure/safe and controllable’ and ‘self-discipline’ as ‘mesh[ing] with broader trends in Chinese digital governance’.Footnote 28

An expert group established by the Chinese Government Ministry of Science and Technology released its eight Governance Principles for the New Generation Artificial Intelligence: Developing Responsible Artificial Intelligence in June 2019.Footnote 29 Again, international cooperation is emphasised in the principles, along with ‘full respect’ for AI development in other countries. A possibly novel inclusion is the idea of ‘agile governance’: that problems arising from AI can be addressed and resolved ‘in a timely manner’. This principle reflects the rapidity of AI development and the difficulty of governing it through conventional procedures, for example through legislation, which can take a long time to pass in China, by which time the technology may have already changed. While ‘agile policy-making’ is a term also used by the European Union High-Level Expert Group, it is used there in relation to the regulatory sandbox approach, as opposed to resolving problems, and is not included in the Group’s Guidelines as a principle.

While, as mentioned previously, Chinese tech corporations have been involved in AI ethics and governance initiatives both domestically in China and internationally in the form of the Partnership on AI,Footnote 30 they also appear to be considering ethics internally in their AI activities. Examples include Toutiao’s Technology Strategy Committee, which partially acts as an internal ethics board.Footnote 31 Tencent also has its AI for Social Good programme and ARCC (Available, Reliable, Comprehensible, Controllable) Principles, but does not appear to have an internal ethics board to review AI developments.Footnote 32

Although the principles set by these initiatives initially lacked legal enforceability and policy implications, China highlighted in the 2017 AIDP three applied focuses for AI, namely international competition, economic growth and social governance,Footnote 33 which have gradually given rise to ethical and then legal debates.

First, China’s agile governance model is transforming AI ethics, as interpreted in industrial standards, into the agenda of national and provincial legislatures. After the birth of a gene-edited baby prompted the establishment of the National Science and Technology Ethics Committee in late 2019, the Ethics Working Group of the Chinese Association of Artificial Intelligence has been planning to establish and formulate various ethical regulations for AI in different industries, such as self-driving, data ethics, smart medicine, intelligent manufacturing and specifications for robots that assist the elderly.Footnote 34 National and local legislation and regulation have been introduced, or are being experimented with, to ensure AI security in relation to drones, self-driving cars and fintech (e.g., robo-advisors).Footnote 35

Second, AI ethics has had a real presence in social issues and judicial cases involving human-machine interaction and liability. One instance concerned whether AI can be recognised as the creator of works for copyright purposes, on which two courts in 2019 came to opposing decisions.Footnote 36 Another has involved regulatory activity on the part of the Cyberspace Administration of China to address deepfakes. It has issued draft Data Security Management Measures which propose requiring, as part of their platform liability, that service providers using AI to automatically synthesise ‘news, blog posts, forum posts, comments etc’ clearly signal such information as ‘synthesized’, and that such synthesis not serve commercial purposes or harm others’ pre-existing interests.Footnote 37

European Union

Perceived to lack the same level of industrial AI strength as China and the United States, the European Union has been positioning itself as a frontrunner in the global debate on AI governance and ethics from legal and policy perspectives. The General Data Protection Regulation (GDPR), a major piece of relevant legislation, came into effect in 2018; its scope (Article 3) extends to some organisations outside of the European Union in certain circumstances,Footnote 38 and it contains provisions on the Right to Object (Article 21) and Automated Individual Decision-Making Including Profiling (Article 22). There is significant discussion as to precisely what these provisions entail in practice regarding algorithmic decision-making, automation and profiling, and whether they are adequate to address the concerns that arise from such processes.Footnote 39

Among other prominent developments in the European Union is the European Parliament Resolution on Civil Law Rules on Robotics from February 2017.Footnote 40 While the Resolution is not binding, it expresses the Parliament’s opinion and requests the European Commission to carry out further work on the topic. In particular, the Resolution ‘consider[ed] that the existing Union legal framework should be updated and complemented, where appropriate, by guiding ethical principles in line with the complexity of robotics and its many social, medical and bioethical implications’.Footnote 41

In March 2018, the European Commission issued a Communication on Artificial Intelligence for Europe, in which the Commission set out ‘a European initiative on AI’ with three main aims: of boosting the European Union’s technological and industrial capacity, and AI uptake; of preparing for socio-economic changes brought about by AI (with a focus on labour, social security and education); and of ensuring ‘an appropriate ethical and legal framework, based on the Union’s values and in line with the Charter of Fundamental Rights of the European Union’.Footnote 42

The European Union High-Level Expert Group on Artificial Intelligence, a multistakeholder group of fifty-two experts from academia, civil society and industry, produced the Ethics Guidelines for Trustworthy AI in April 2019, including seven key, but non-exhaustive, requirements that AI systems ought to meet in order to be ‘trustworthy’.Footnote 43 The Expert Group then produced Policy and Investment Recommendations for Trustworthy AI in June 2019.Footnote 44 Among the recommendations (along with those pertaining to education, research, government use of AI and investment priorities) is strong criticism of both state and corporate surveillance using AI, including that governments should commit not to engage in mass surveillance, and that the commercial surveillance of individuals, including via ‘free’ services, should be countered.Footnote 45 This is furthered by a specific recommendation that AI-enabled ‘mass scoring’ of individuals be banned.Footnote 46 The Group called for more work to assess existing legal and regulatory frameworks to discern whether they are adequate to address its recommendations or whether reform is necessary.Footnote 47

The European Commission released its White Paper on AI in February 2020, setting out an approach based on ‘European values, to promote the development and deployment of AI’.Footnote 48 Among a host of proposals for education, research and innovation, industry collaboration and public sector AI adoption, the Commission asserts that ‘international cooperation on AI matters must be based on an approach which promotes the respect of fundamental rights’ and, more bullishly, that it will ‘strive to export its values across the world’.Footnote 49

A section of the White Paper is devoted to regulatory frameworks, with the Commission setting out its proposals for a new risk-based regulatory framework for AI targeting ‘high risk’ applications. These applications would be subject to additional requirements concerning: training data for AI; the keeping of records and data beyond what is currently required, to verify legal compliance and enforcement; the provision of more information than is currently required, including whether citizens are interacting with a machine rather than a human; ex ante requirements for the robustness and accuracy of AI applications; human oversight; and specific requirements for remote biometric identification systems.Footnote 50 The White Paper has been released for public consultation, and follow-up work from the Commission is scheduled for late 2020.

Alongside this activity, in early 2020 the European Parliament debated various reports prepared by MEPs on the civil liability, intellectual property and ethics aspects of AI.Footnote 51 Issues such as the lack of a harmonised approach among EU Member States and the lack of harmonised definitions of AI, giving rise to legal uncertainty, featured in the reports and debates, as did calls for more research on specific frameworks such as IP.Footnote 52 MEPs are due to debate and vote on amendments to the reports later in 2020. It is unclear whether COVID-19 disruptions will alter these timelines.

In addition to this activity at the supranational level, EU Member States continue with their own AI governance and ethics activities. This may contribute to the aforementioned divergence within the bloc, a factor which may justify EU-level regulation and standardisation. Prominent among them is Germany, which has had its own national AI Strategy since 2018.Footnote 53 In light of competition with other countries such as the United States and China, Germany – in accordance with the principles of the European Union Strategy for Artificial Intelligence – intends to set itself apart from other, non-European nations through data protection-friendly, trustworthy and ‘human centred’ AI systems that are to be used for the common good.Footnote 54 At the centre of those claims is the idea of establishing ‘AI Made in Germany’ as a brand, which is supposed to become a globally acknowledged label of quality. Behind this ‘brand’ is the idea that AI applications made in Germany or, to be more precise, the data sets these AI applications use, come under the umbrella of data sovereignty, informational self-determination and data safety. Moreover, to ensure that AI research and innovation are in line with ethical and legal standards, a Data Ethics Commission was established, which can make recommendations to the federal government and give advice on how to use AI in an ethically sound manner.

The Data Ethics Commission issued its first report, written by sixteen Commission experts and intended as a set of ethical guidelines to ensure safety, prosperity and social cohesion among those affected by algorithmic decision-making or AI.Footnote 55 Among other aims promoting human-centred and value-oriented AI design, the report introduces ideas for risk-oriented AI regulation, aimed at strengthening Germany’s and Europe’s ‘digital sovereignty’. Seventy-five rules are detailed in the report to implement the main ethical principles it draws upon, namely human dignity, self-determination, privacy, security, democracy, justice, solidarity and sustainability. Operationalising these rules is the subject of a recent report, ‘From Principles to Practice – An Interdisciplinary Framework to Operationalise AI Ethics’, resulting from the work of the interdisciplinary expert Artificial Intelligence Ethics Impact Group (AIEIG), which describes in detail how organisations conducting research and development of AI applications can implement ethical precepts into executable practice.Footnote 56 Another example of this practical approach can be seen in the recent Lernende Systeme (German National Platform for AI) report launching certification proposals for AI applications, which are aimed at, inter alia, creating legal certainty and increasing public trust in AI through, for example, a labelling system for consumers.Footnote 57 These certification proposals may serve as predecessors for future legal requirements, such as those which may be proposed at the EU level.

India

India’s approach to AI is substantially informed by three initiatives at the national level. The first is Digital India, which aims to make India a digitally empowered knowledge economy;Footnote 58 the second is Make in India, under which the government of India is prioritising AI technology designed and developed in India;Footnote 59 and the third is the Smart Cities Mission.Footnote 60

An AI Task Force constituted by the Ministry of Commerce and Industry in 2017 looked at AI as a socio-economic problem solver at scale. In its report, it identified ten key sectors in which AI should be deployed, including national security, financial technology, manufacturing and agriculture.Footnote 61 Similarly, a National Strategy for Artificial Intelligence published in 2018 went further, looking at AI as a lever for economic growth and social development and considering India a potential ‘garage’ for AI applications.Footnote 62 While both documents mention ethics, they fail to meaningfully engage with issues of fundamental rights, fairness, inclusion and the limits of data-driven decision-making. They are also heavily influenced by the private sector, with civil society and academia rarely, if ever, being invited into these discussions.

The absence of an explicit legal and ethical framework for AI systems, however, has not stalled deployment. In July 2019, the Union Home Ministry announced plans for a nationwide Automated Facial Recognition System (AFRS) that would use images from CCTV cameras, police raids and newspapers to identify criminals and enhance information sharing between policing units in the country. This was announced, and subsequently developed, in the absence of any legal basis. The form and extent of the AFRS directly violate the four-part proportionality test laid down by the Supreme Court of India in August 2017, which held that any violation of the fundamental right to privacy must be in pursuit of a legitimate aim, bear a rational connection to that aim and be shown to be necessary and proportionate.Footnote 63 In December 2019, facial recognition was reported to have been used by Delhi Police to identify ‘habitual protestors’ and ‘rowdy elements’ against the backdrop of nationwide protests against changes in India’s citizenship law.Footnote 64 In February 2020, the Home Minister stated that over a thousand ‘rioters’ had been identified using facial recognition.Footnote 65

These developments are made even more acute by the absence of data protection legislation in India. The Personal Data Protection Bill carves out significant exceptions for state use of data, with the drafters of the bill themselves publicly expressing concerns about the lack of safeguards in the latest version. The current Bill also fails to adequately engage with the question of inferred data, which is particularly important in the context of machine learning. These issues arise in addition to crucial questions about how sensitive personal data is currently processed and shared. India’s biometric identity project, Aadhaar, could also become a central point for AI applications in the future, with a few proposals for the use of facial recognition having emerged in the last year, although that is not currently the case.

India recently became one of the founding members of the aforementioned Global Partnership on AI.Footnote 66 Apart from this, the government had published no ethical framework or principles at the time of writing. It is likely that ethical principles will emerge shortly, following global developments in the context of AI and public attention on data protection law in the country.

United States of America

Widely believed to be rivalled only by China in its domestic research and development of AI,Footnote 67 the US government had been less institutionally active regarding questions of ethics, governance and regulation than China and the European Union, until the Trump Administration’s Executive Order on Maintaining American Leadership in Artificial Intelligence in February 2019.Footnote 68 Prior to this, the United States had a stronger record of AI ethics and governance activity from the private and not-for-profit sectors. Various US-headquartered or US-originating multinational tech corporations have issued ethics statements on their AI activities, such as Microsoft and the Google Alphabet group company DeepMind. Some US-based not-for-profit organisations and foundations have also been active, such as the Future of Life Institute with its twenty-three Asilomar AI Principles.Footnote 69

The 2019 Executive Order has legal force, and created an American AI Initiative guided by five high-level principles to be implemented by the National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence.Footnote 70 These principles include the United States driving the development of ‘appropriate technical standards’ and protecting ‘civil liberties, privacy and American values’ in AI applications ‘to fully realize the potential for AI technologies for the American people’.Footnote 71 Internationalisation is included, with a view to opening foreign markets for US AI technology and protecting the United States’s critical AI technology ‘from acquisition by strategic competitors and adversarial nations’. Furthermore, executive departments and agencies that engage in AI-related activities, including ‘regulat[ing] and provid[ing] guidance for applications of AI technologies’, must adhere to six strategic objectives, including protection of ‘American technology, economic and national security, civil liberties, privacy, and values’.

The US Department of Defense also launched its own AI Strategy in February 2019.Footnote 72 The Strategy explicitly mentions US military rivals China and Russia investing in military AI ‘including in applications that raise questions regarding international norms and human rights’, as well as the perceived ‘threat’ of these developments to the United States and ‘the free and open international order’. As part of the Strategy, the Department asserts that it ‘will articulate its vision and guiding principles for using AI in a lawful and ethical manner to promote our values’, and will ‘continue to share our aims, ethical guidelines, and safety procedures to encourage responsible AI development and use by other nations’. The Department also asserted that it would develop principles for AI ethics and safety in defence matters after multistakeholder consultations, and would promote its views to a more global audience, with the seemingly intended consequence that its vision will inform a global set of military AI ethics.

In February 2020, the White House Office of Science and Technology Policy published a report documenting activities in the twelve months since the Executive Order was issued.Footnote 73 The report frames activity relating to governance under the heading of ‘Remove Barriers to AI Innovation’, which foregrounds deregulatory language but may be contradicted in part by the stated need for the United States to be ‘providing guidance for the governance of AI consistent with our Nation’s values and by driving the development of appropriate AI technical standards’.Footnote 74 However, there may be no conflict if soft law non-binding ‘guidance’ displaces hard law binding regulatory requirements. In January 2020, the White House published the US AI Regulatory Principles for public comment, which would establish guidance for federal agencies ‘to inform the development of regulatory and non-regulatory approaches regarding technologies and industrial sectors that are empowered or enabled by artificial intelligence (AI) and consider ways to reduce barriers to the development and adoption of AI technologies’.Footnote 75 Specifically, federal agencies are told to ‘avoid regulatory or non-regulatory actions which needlessly hamper AI innovation and growth’; they must assess regulatory actions against their effect on AI innovation and growth, and ‘must avoid a precautionary approach’.Footnote 76 Ten principles are set out to guide federal agencies’ activities (reflecting those in the Executive Order), along with suggested non-regulatory approaches, such as ‘voluntary consensus standards’ and other activities outside of rulemaking, which would fulfil the direction to reduce regulatory barriers (such as increasing public access to government-held data sets).Footnote 77

During 2019 and 2020, the US Food and Drug Administration (FDA) proposed regulatory frameworks for AI-based software as a medical device and draft guidance for clinical decision support software.Footnote 78 The US Patent and Trademark Office (USPTO) issued a public consultation on whether inventions developed by AI should be patentable. These activities could be framed as attempts to clarify how existing frameworks apply to AI applications but do not appear to involve the ‘removal’ of regulatory ‘barriers’.

9.4 Analysis

From the country and region profiles, we can see that AI governance and ethics activities have proliferated at the government level, even among previously reticent administrations such as the United States. India remains an outlier as the only country among our sample with no set of articulated AI governance or ethics principles. This may change, however, with India’s participation in the GPAI initiative.

Themes of competition loom large over AI policies, as regards competition with other ‘large’ countries or jurisdictions. The AI competition between China and the United States as global forerunners in research and development may be reflected in the United States Executive Order being framed around preserving the United States’s competitive position, and in China’s ambition to become the global AI leader by 2030. We now see the European Union entering the fray more explicitly, with its wish to export its own values internationally. However, there are also calls for global collaboration on AI ethics and governance, including from all of these actors. In practice, these are not all taking place through traditional multilateral fora such as the UN, as can be seen with the launch of GPAI. Smaller countries, as the Australian example shows, may be ‘followers’ rather than ‘leaders’, receiving ethical principles and approaches formulated by other, similar but larger, countries.

In many of the AI ethics/governance statements, we see similar if not the same concepts reappear, such as transparency, explainability, accountability and so forth. Hagendorff has pointed out that these frequently encountered principles are often ‘the most easily operationalized mathematically’, which may account partly for their presence in many initiatives.Footnote 79 Some form of ‘privacy’ or ‘data protection’ also features frequently, even in the absence of robust privacy/data protection laws as in the United States example. In India, AI ethical principles might follow the development of binding data protection legislation which is still pending. Nevertheless, behind some of these shared principles may lie different cultural, legal and philosophical understandings.

There are already different areas of existing law, policy and governance which will apply to AI and its implementations including technology and industrial policy, data protection, fundamental rights, private law, administrative law and so forth. Increasingly the existence of these pre-existing frameworks is being acknowledged in the AI ethics/governance initiatives, although more detailed research may be needed, as the European Parliament draft report on intellectual property and AI indicates. It is important for those to whom AI ethics and governance guidelines are addressed to be aware that they may need to consider, and comply with, further principles and norms in their AI research, development and application, beyond those articulated in AI-specific guidelines. Research on other novel digital technologies suggests that new entrants may not be aware of such pre-existing frameworks and may instead believe that their activities are ‘unregulated’.Footnote 80

On the question of ‘ethics washing’ – or the legal enforceability of AI ethics statements – it is true that almost all of the AI ethics and governance documents we have considered do not have the force of binding law. The US Executive Order is an exception in that regard, although it constitutes more of a series of directions to government agencies rather than a detailed set of legally binding ethical principles. In China and the European Union, there are activities and initiatives to implement aspects of the ethical principles in specific legal frameworks, whether pre-existing or novel. This can be contrasted with Australia, whose ethical principles are purely voluntary, and where discussions of legal amendment for AI are less developed.

However, the limits of legal enforceability can also be seen in the United States example, where there is the paradox of a legally enforced deregulatory approach: the Executive Order, and the processes it has triggered, mandate that other public agencies forbear from regulating AI in their specific domains unless necessary. In practice, though, the FDA may be circumventing this obstacle through ‘clarifications’ of its existing regulatory practices vis-à-vis AI and medical devices.

In any event, the United States example illustrates that the legal enforceability of AI governance and ethics strategies does not necessarily equate to substantively better outcomes as regards actual AI governance and regulation. Perhaps in addition to ethics washing, we must be attentive towards ‘law washing’, whereby the binding force of law does not necessarily stop unethical uses of AI by government and corporations; or to put it another way, the mere fact that an instrument has a legally binding character does not ensure that it will prevent unethical uses of AI. Both the form and substance of the norms must be evaluated to determine their ‘goodness’.Footnote 81

Furthermore, the legal enforceability of norms may be stymied by a lack of practical operationalisation by AI industry players – or by the fact that it is not practical to operationalise them. Some governments have taken this aspect seriously and implemented activities, initiatives and guidance on these aspects, usually developed with researchers and industry representatives. It is hoped that this will ensure the practical implementation of legal and ethical principles in AI’s development and avoid situations where laws or norms are developed divorced from the technological reality.

9.5 Conclusion

In this chapter, we have given an overview of the development of AI governance and ethics initiatives in a number of countries and regions, including the world AI research and development leaders China and the United States, and what may be emerging as a regulatory leader in the form of the European Union. Since the 2019 Executive Order, the United States has started to catch up with China and the European Union regarding domestic legal and policy initiatives. India remains an outlier, with limited activity in this space and no articulated set of AI ethical principles. Australia, with its voluntary ethical principles, may show the challenges a smaller jurisdiction and market faces when larger entities have already taken the lead on a technology law and policy topic.

Legal enforceability of norms is increasingly the focus of activity, usually through an evaluation of pre-existing legal frameworks or the creation of new frameworks and obligations. While the ethics-washing critique still stands to some degree vis-à-vis AI ethics, the focus of activity is moving towards the law – and also practical operationalisation of norms. Nevertheless, this shift in focus may not always produce desirable outcomes. Both the form and substance of AI norms – whether soft law principles or hard law obligations – must be evaluated to determine their ‘goodness’.

A greater historical perspective is also warranted regarding the likelihood of success for AI ethics/governance initiatives, whether as principles or laws: for instance, by examining the success or otherwise of previous attempts to govern new technologies, such as biotech and the Internet, or to insert ethics into other domains such as medicine.Footnote 82 While each new technology has its own specificities, different predecessor technologies from which it has sprung and different social, economic and political conditions, looking to the historical trajectory of new technologies and their governance may teach us some lessons for AI governance and ethics.

A further issue for research may arise around regulatory or policy arbitrage, whereby organisations or researchers from a country or region which does have AI ethics/governance principles engage in ‘jurisdiction shopping’, moving to a location which does not, or which has laxer standards, in order to research and develop AI with fewer ‘constraints’. This offshoring of AI development to ‘less ethical’ countries may already be happening and is largely or completely unaddressed in current AI governance and ethics initiatives.

Footnotes

* This chapter is a revised and updated version of a report the authors wrote in 2019: Angela Daly, Thilo Hagendorff, Li Hui, Monique Mann, Vidushi Marda, Ben Wagner, Wayne Wei Wang and Saskia Witteborn, ‘Artificial Intelligence, Governance and Ethics: Global Perspectives’ (The Chinese University of Hong Kong Faculty of Law Research Paper No. 2019-15, 2019).

We acknowledge the support for this report from Angela Daly’s Chinese University of Hong Kong Direct Grant for Research 2018–2019 ‘Governing the Future: How Are Major Jurisdictions Tackling the Issue of Artificial Intelligence, Law and Ethics?’.

We also acknowledge the research assistance for the report from Jing Bei and Sunny Ka Long Chan, and the comments and observations from participants in the CUHK Law Global Governance of AI and Ethics workshop, 20–21 June 2019.

1 See, e.g., Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Penguin Random House 2016); Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (NYU Press 2017).

2 See, e.g., Ronald Arkin, ‘Ethical Robots in Warfare’ (2009) 28(1) IEEE Technology & Society Magazine 30; Richard Mason, ‘Four Ethical Issues of the Information Age’ in John Weckert (ed), Computer Ethics (Routledge 2017).

3 See, e.g., Ronald Leenes and Federica Lucivero, ‘Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behaviour by Design’ (2014) 6(2) Law, Innovation & Technology 193; Ryan Calo, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103(3) California Law Review 513; Sandra Wachter, Brett Mittelstadt and Luciano Floridi, ‘Transparent, Explainable, and Accountable AI for Robotics’ (2017) 2(6) Science Robotics 6080.

4 See, e.g., European Commission, ‘European Group on Ethics in Science and New Technologies Statement on Artificial Intelligence, Robotics and “Autonomous” Systems’ (2018) https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf accessed 21 June 2020; Sundar Pichai, ‘AI at Google: Our Principles’ (7 June 2018) www.blog.google/technology/ai/ai-principles/ accessed 21 June 2020.

5 Otfried Höffe, Ethik: Eine Einführung (C. H. Beck 2013).

6 Iyad Rahwan et al., ‘Machine Behaviour’ (2019) 568(7753) Nature 477.

7 Thilo Hagendorff, ‘The Ethics of AI Ethics: An Evaluation of Guidelines’ (2020) 30 Minds & Machines 99.

8 Future of Life Institute, ‘Asilomar AI Principles’ (2017) https://futureoflife.org/ai-principles accessed 21 June 2020.

9 Ben Wagner, ‘Ethics as an Escape from Regulation: From Ethics-Washing to Ethics-Shopping?’ in Mireille Hildebrandt (ed), Being Profiled. Cogitas ergo sum (Amsterdam University Press 2018).

10 OECD, ‘OECD Principles on AI’ (2019) www.oecd.org/going-digital/ai/principles/ accessed 21 June 2020; G20, ‘Ministerial Statement on Trade and Digital Economy’ (2019) https://trade.ec.europa.eu/doclib/docs/2019/june/tradoc_157920.pdf accessed 21 June 2020.

11 ITU, ‘United Nations Activities on Artificial Intelligence’ (2018) www.itu.int/dms_pub/itu-s/opb/gen/S-GEN-UNACT-2018-1-PDF-E.pdf accessed 21 June 2020.

12 UNESCO, ‘Elaboration of a Recommendation on Ethics of Artificial Intelligence’ https://en.unesco.org/artificial-intelligence/ethics accessed 21 June 2020.

13 Janosch Delcker, ‘US, Russia Block Formal Talks on Whether to Ban “Killer Robots”’ (Politico, 1 September 2018) www.politico.eu/article/killer-robots-us-russia-block-formal-talks-on-whether-to-ban/ accessed 21 June 2020.

14 Government of Canada, ‘Joint Statement from Founding Members of the Global Partnership on Artificial Intelligence’ (15 June 2020) www.canada.ca/en/innovation-science-economic-development/news/2020/06/joint-statement-from-founding-members-of-the-global-partnership-on-artificial-intelligence.html accessed 21 June 2020.

15 See Monique Mann, Angela Daly, Michael Wilson and Nicolas Suzor, ‘The Limits of (Digital) Constitutionalism: Exploring the Privacy-Security (Im)balance in Australia’ (2018) 80(4) International Communication Gazette 369; Monique Mann and Angela Daly, ‘(Big) Data and the North-in-South: Australia’s Informational Imperialism and Digital Colonialism’ (2019) 20(4) Television & New Media 379.

16 Australian Government Department of Industry, Innovation and Science, ‘Artificial Intelligence: Australia’s Ethics Framework’ (7 November 2019) https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/ accessed 22 June 2020.

17 Australian Government Department of Industry, Science, Energy and Resources, ‘AI Ethics Principles’ www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles accessed 22 June 2020.

18 Australian Government Department of Industry, Science, Energy and Resources, ‘Applying the AI Ethics Principles’ www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/applying-the-ai-ethics-principles accessed 22 June 2020.

19 Australian Human Rights Commission, ‘Human Rights and Technology’ (17 December 2019) www.humanrights.gov.au/our-work/rights-and-freedoms/projects/human-rights-and-technology accessed 22 June 2020.

20 FLIA, ‘China’s New Generation of Artificial Intelligence Development Plan’ (30 July 2017) https://flia.org/notice-state-council-issuing-new-generation-artificial-intelligence-development-plan/ accessed 22 June 2020.

23 中国电子技术标准化研究院 (China Electronics Standardization Institute), ‘人工智能标准化白皮书 (White Paper on AI Standardization)’ (January 2018) www.cesi.cn/images/editor/20180124/20180124135528742.pdf accessed 22 June 2020.

24 国家人工智能标准化总体组 (National AI Standardization Group), ‘人工智能伦理风险分析报告 (Report on the Analysis of AI-Related Ethical Risks)’ (April 2019) www.cesi.cn/images/editor/20190425/20190425142632634001.pdf accessed 22 June 2020. The references include (1) the Asilomar AI Principles; (2) the Japanese Society for Artificial Intelligence Ethical Guidelines; (3) the Montréal Declaration for Responsible AI (draft) Principles; (4) the Partnership on AI to Benefit People and Society; (5) the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

25 Huw Roberts et al., ‘The Chinese Approach to Artificial Intelligence: An Analysis of Policy and Regulation’ (2020) AI & Society (forthcoming).

26 国家人工智能标准化总体组 (National AI Standardization Group) (Footnote n 24) 31–32.

27 Beijing Academy of Artificial Intelligence, ‘Beijing AI principles’ (28 February 2019) www.baai.ac.cn/blog/beijing-ai-principles accessed 22 June 2020.

28 Graham Webster, ‘Translation: Chinese AI Alliance Drafts Self-Discipline “Joint Pledge”’ (New America Foundation, 17 June 2019) www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinese-ai-alliance-drafts-self-discipline-joint-pledge/ accessed 22 June 2020.

29 China Daily, ‘Governance Principles for the New Generation Artificial Intelligence – Developing Responsible Artificial Intelligence’ (17 June 2019) www.chinadaily.com.cn/a/201906/17/WS5d07486ba3103dbf14328ab7.html accessed 22 June 2020.

30 However, the Chinese representative, Baidu, China’s largest search engine company, has recently left the Partnership on AI amid current US-China tensions. See Will Knight, ‘Baidu Breaks Off an AI Alliance Amid Strained US-China Ties’ (Wired, 18 June 2020) www.wired.com/story/baidu-breaks-ai-alliance-strained-us-china-ties/ accessed 13 August 2020.

31 新京报网 (BJNews), ‘人工智能企业要组建道德委员会,该怎么做? (Shall AI Enterprises Establish an Internal Ethics Board? And How?)’ (2019) www.bjnews.com.cn/feature/2019/07/26/608130.html accessed 15 May 2020.

32 J. Si, ‘Towards an Ethical Framework for Artificial Intelligence’ (2018) https://mp.weixin.qq.com/s/_CbBsrjrTbRkKjUNdmhuqQ.

33 Roberts et al. (Footnote n 25).

34 中新网 (ChinaNews), ‘新兴科技带来风险 加快建立科技伦理审查制度 (As Emerging Technologies Bring Risks, the State Should Accelerate the Establishment of a Scientific and Technological Ethics Review System)’ (9 August 2019) https://m.chinanews.com/wap/detail/zw/gn/2019/08-09/8921353.shtml accessed 22 June 2020.

35 全国信息安全标准化技术委员会 (National Information Security Standardization Technical Committee), ‘人工智能安全标准化白皮书 (2019版) (2019 Artificial Intelligence Security Standardization White Paper)’ (October 2019) www.cesi.cn/images/editor/20191101/20191101115151443.pdf accessed 22 June 2020.

36 Kan He, ‘Feilin v. Baidu: Beijing Internet Court Tackles Protection of AI/Software-Generated Work and Holds that Copyright Only Vests in Works by Human Authors’ (The IPKat, 9 November 2019) http://www.ipkitten.blogspot.com/2019/11/feilin-v-baidu-beijing-internet-court.html accessed 22 June 2020; ‘AI Robot Has IP Rights, Says Shenzhen Court’ (Greater Bay Insight, 6 January 2020) https://greaterbayinsight.com/ai-robot-has-ip-rights-says-shenzhen-court/ accessed 22 June 2020.

38 Benjamin Greze, ‘The Extra-territorial Enforcement of the GDPR: A Genuine Issue and the Quest for Alternatives’ (2019) 9(2) International Data Privacy Law 109.

39 See, e.g., Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16(1) Duke Law & Technology Review 18; Sandra Wachter, Brett Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76.

40 European Parliament, ‘Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics’ (2015/2103(INL)) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52017IP0051 accessed 22 June 2020.

42 European Commission, ‘Communication on Artificial Intelligence for Europe’ (COM/2018/237 final, 2018) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN accessed 22 June 2020.

43 European Commission Independent High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (Final Report, 2019) https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai accessed 22 June 2020.

44 European Commission Independent High-Level Expert Group on Artificial Intelligence ‘Policy and Investment Recommendations for Trustworthy AI’ (26 June 2019) https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence accessed 22 June 2020.

48 European Commission, ‘White Paper on Artificial Intelligence – A European Approach to Excellence and Trust’ (COM(2020) 65 final, 2020) https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf accessed 22 June 2020.

51 European Parliament Committee on Legal Affairs, ‘Draft Report with Recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence’ (2020/2014(INL), 2020); European Parliament Committee on Legal Affairs, ‘Draft Report with Recommendations to the Commission on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies’ (2020/2012(INL), 2020); European Parliament Committee on Legal Affairs, ‘Draft Report on Intellectual Property Rights for the Development of Artificial Intelligence Technologies’ (2020/2015(INI), 2020).

52 Samuel Stolton, ‘MEPs Chart Path for a European Approach to Artificial Intelligence’ (EurActiv, 12 May 2020) www.euractiv.com/section/digital/news/meps-chart-path-for-a-european-approach-to-artificial-intelligence/ accessed 22 June 2020.

53 Bundesministerium für Bildung und Forschung; Bundesministerium für Wirtschaft und Energie; Bundesministerium für Arbeit und Soziales, ‘Strategie Künstliche Intelligenz der Bundesregierung (The Federal Government’s Artificial Intelligence Strategy)’ (15 November 2018) www.bmwi.de/Redaktion/DE/Publikationen/Technologie/strategie-kuenstliche-intelligenz-der-bundesregierung.html accessed 22 June 2020.

54 European Commission (n 45).

55 Datenethikkommission der Bundesregierung, ‘Gutachten der Datenethikkommission der Bundesregierung (Opinion of the Data Ethics Commission of the Federal Government)’ (2019) www.bmjv.de/SharedDocs/Downloads/DE/Themen/Fokusthemen/Gutachten_DEK_DE.pdf?__blob=publicationFile&v=3 accessed 22 June 2020.

56 Sebastian Hallensleben et al., From Principles to Practice. An Interdisciplinary Framework to Operationalise AI Ethics (Bertelsmann Stiftung 2020).

57 Jessica Heesen, Jörn Müller-Quade and Stefan Wrobel, Zertifizierung von KI-Systemen (Certification of AI Systems) (München 2020).

58 Government of India Ministry of Electronics & Information Technology, ‘Digital India Programme’ https://digitalindia.gov.in/ accessed 22 June 2020.

59 Government of India Ministry of Finance, ‘Make in India’ www.makeinindia.com/home/ accessed 22 June 2020.

60 Government of India Ministry of Housing and Urban Affairs, ‘Smart Cities Mission’ www.smartcities.gov.in/content/ accessed 22 June 2020; Vidushi Marda, ‘Artificial Intelligence Policy in India: A Framework for Engaging the Limits of Data-Driven Decision-Making’ (2018) 376(2133) Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.

61 Government of India Ministry of Commerce and Industry, ‘Report of the Artificial Intelligence Task Force’ (20 March 2018) https://dipp.gov.in/sites/default/files/Report_of_Task_Force_on_ArtificialIntelligence_20March2018_2.pdf accessed 22 June 2020.

62 NITI Aayog, ‘National Strategy for Artificial Intelligence’ (discussion paper, June 2018) https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf accessed 22 June 2020.

63 Vidushi Marda, ‘Every Move You Make’ (India Today, 29 November 2019) www.indiatoday.in/magazine/up-front/story/20191209-every-move-you-make-1623400-2019-11-29 accessed 22 June 2020.

64 Jay Mazoomdaar, ‘Delhi Police Film Protests, Run Its Images through Face Recognition Software to Screen Crowd’ (The Indian Express, 28 December 2019) https://indianexpress.com/article/india/police-film-protests-run-its-images-through-face-recognition-software-to-screen-crowd-6188246/ accessed 22 June 2020.

65 Vijaita Singh, ‘1,100 Rioters Identified Using Facial Recognition Technology: Amit Shah’ (The Hindu, 12 March 2020) https://economictimes.indiatimes.com/news/economy/policy/personal-data-protection-bill-can-turn-india-into-orwellian-state-justice-bn-srikrishna/articleshow/72483355.cms accessed 22 June 2020.

66 The New Indian Express, ‘India Joins GPAI as Founding Member to Support Responsible, Human-Centric Development, Use of AI’ (15 June 2020) www.newindianexpress.com/business/2020/jun/15/india-joins-gpai-as-founding-member-to-support-responsible-human-centric-development-use-of-ai-2156937.html accessed 22 June 2020.

67 Stephen Cave and Seán Ó hÉigeartaigh, ‘An AI Race for Strategic Advantage: Rhetoric and Risks’ (AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, 2018).

68 US White House, ‘Executive Order on Maintaining American Leadership in Artificial Intelligence’ (11 February 2019) www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/ accessed 22 June 2020.

69 Future of Life Institute (n 8).

70 US White House (n 68).

72 US Department of Defense, ‘Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity’ (2019) https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF accessed 22 June 2020.

73 US White House Office for Science and Technology Policy, ‘American Artificial Intelligence Initiative: Year One Annual Report’ (February 2020) www.whitehouse.gov/wp-content/uploads/2020/02/American-AI-Initiative-One-Year-Annual-Report.pdf accessed 22 June 2020.

75 US White House Office of Management and Budget, ‘Guidance for Regulation of Artificial Intelligence Applications’ (Draft Memorandum, January 2020) www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf accessed 22 June 2020.

78 US Food and Drug Administration, ‘Artificial Intelligence and Machine Learning in Software as a Medical Device’ (28 January 2020) www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device accessed 22 June 2020; US Food and Drug Administration, ‘Clinical Decision Support Software’ (September 2019) www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-decision-support-software accessed 22 June 2020.

79 Hagendorff (n 7).

80 Antonia Horst and Fiona McDonald, ‘Personalisation and Decentralisation: Potential Disrupters in Regulating 3D Printed Medical Products’ (working paper, 2020).

81 See Angela Daly, S. Kate Devitt and Monique Mann (eds), Good Data (Institute of Network Cultures 2019).

82 Brett Mittelstadt, ‘Principles Alone Cannot Guarantee Ethical AI’ (2019) 1(11) Nature Machine Intelligence 501.
