
Part IV - International Economic Law Limits to Artificial Intelligence Regulation

Published online by Cambridge University Press:  01 October 2021

Shin-yi Peng, National Tsing Hua University, Taiwan
Ching-Fu Lin, National Tsing Hua University, Taiwan
Thomas Streinz, New York University School of Law

In: Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration, pp. 235–292
Publisher: Cambridge University Press
Print publication year: 2021
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

12 Public Morals, Trade Secrets, and the Dilemma of Regulating Automated Driving Systems

Ching-Fu Lin Footnote *
I Introduction

The market for automated driving systems (ADSs, commonly referred to as automated vehicles, autonomous cars, or self-driving cars)Footnote 1 is predicted to grow from US$54.2 billion in 2019 to US$556.6 billion in 2026.Footnote 2 Annual sales of vehicles equipped with ADSs are expected to reach around 21 million globally by 2035, with cumulative sales of 76 million through 2035,Footnote 3 in an inextricably connected global market of automobiles, information and communication technology (ICT), and artificial intelligence (AI) platforms and services, along a massive value chain that transcends borders. Indeed, ADSs – one of the most promising AI applications – build on software infrastructure that works with sensing technologies such as Light Detection and Ranging (LiDAR), radar, and high-resolution cameras to perform part or all of the dynamic driving tasks.Footnote 4 The ADS industry landscape is complex and dynamic, including not only automobile companies and suppliers (e.g., Daimler AG, Ford Motor Company, BMW AG, Tesla Inc., and Denso Corporation), but also ICT giants (e.g., Waymo, Intel Corporation, Apple Inc., NVIDIA Corporation, Samsung, and Baidu) and novel service providers (e.g., Uber, Lyft, and China’s Didi Chuxing) in different parts of the world. There has also been an increasing number of cross-sectoral collaborative initiatives between such companies, including the partnership between Uber and Toyota to expand the ride-sharing market,Footnote 5 or General Motors’ investment in Lyft, undertaken with the goal of developing self-driving taxis.Footnote 6

While governments around the world have been promoting ADS development and relevant industries,Footnote 7 they have also been contemplating rules and standards in response to the technology’s legal, economic, and social ramifications. Apart from road safety and economic development,Footnote 8 ADSs promise to transform the ways in which people commute between places and connect with one another, which will further alter the conventional division of labor, social interactions, and the provision of services. Regulatory requirements for testing and safety, as well as technical standards on cybersecurity and connectivity, are necessary for vehicles with ADSs to be allowed on roadways; yet, given the experimental nature of the related technologies, governments worldwide have not established comprehensive and consistent policy frameworks within their own jurisdictions, let alone reached multilateral consensus or harmonization. Furthermore, liability rules, insurance policies, and new law enforcement tools are also relevant issues, if not prerequisites. Last but not least, responding to the ethical challenges posed by ADSs is key to building the trust and confidence among consumers, societies, and governments needed to support wide and full-scale application. How to align ADS research and development with fundamental ethical principles embedded in a given society – with its own values and cultural contexts – remains a difficult policy question. The “Trolley Problem” aptly demonstrates such tension.Footnote 9 As will be discussed, such challenges not only touch upon substantive norms, such as morality, equality, and justice, but also call for procedural safeguards, such as algorithmic transparency and explainability.

Faced with such challenges, governments are designing and constructing legal and policy infrastructures with diverse forms and substances to facilitate the future of connected transportation. Major players along the global ADS value chain have yet to agree upon a common set of rules and standards to forge regulatory governance on a global scale, partly because of different political agendas and strategic positions.Footnote 10 While it seems essential to have rules and standards that reflect local values and contexts, potential conflicts and duplication may have serious World Trade Organization (WTO) implications. In Section II, this chapter examines key regulatory issues of ADSs along the global supply chain. Regulatory efforts and standard-setting processes among WTO members and international (public and private) organizations also evidence both convergence and divergence across different issues. While regulatory issues such as liability, cybersecurity, data flow, and infrastructure are multifaceted, complex, and fluid, and certainly merit scholarly investigation, this chapter cannot and does not intend to cover them all. Rather, in Section III, this chapter uses the most controversial (but not futuristic) issue – the ethical dimension of ADSs, which raises tensions between the protection of public morals and trade secrets – to demonstrate the regulatory dilemma faced by regulators and its WTO implications. It points out three levels of key challenges that may translate into a regulatory dilemma in light of WTO members’ rights and obligations, including those in the General Agreement on Tariffs and Trade (GATT), the Agreement on Technical Barriers to Trade (TBT Agreement), and the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement).Footnote 11 Section IV concludes.

II Automated Driving Systems: Mapping Key Regulatory Issues
A Regulatory Challenges Facing Automated Driving Systems and the “Moral Machine” Dilemma

At the outset, the use of terminology and taxonomy must be clarified. There exist various terms that are used to refer to vehicles equipped with different levels of driving automation systems (a generic term that covers all levels of automation), such as self-driving cars, unmanned vehicles, autonomous cars, and automated vehicles. However, for reasons to be elaborated later, this chapter consciously uses “ADSs” – namely, level 3–5 systems as defined by the SAE International’s taxonomy and definitionsFootnote 12 – to refer to the kinds of driving automation that require only limited human intervention and that more appropriately denote the essence of commonly known terms such as “self-driving cars” or “autonomous vehicles.” Indeed, the inconsistent and sometimes confusing use of terms such as “self-driving cars” or “autonomous vehicles” may lead to problems not only related to misleading marketing practices, mistaken consumer perceptions, and information asymmetry, but also insufficient and ineffective regulatory design. For instance, in the robotics and AI literature, the term “autonomous” has been used to denote systems capable of making decisions and acting “independently and self-sufficiently,”Footnote 13 but the use of such terms “obscures the question of whether a so-called ‘autonomous vehicle’ depends on communication and/or cooperation with outside entities for critical functionality (such as data acquisition and collection).”Footnote 14 Some products may be fully autonomous as long as their functions are executed entirely independently and self-sufficiently to the extent entailed in level 5, while others may depend on external cooperation and connection to work (which may fall under the scope of level 3 or level 4). Yet when the term “autonomous vehicle” is commonly used to refer to level 5, levels 3 and 4, or even all levels of driving automation as defined in various legislation enacted in different states,Footnote 15 regulatory confusion ensues. 
Comparable conceptual and practical problems can also be found with the use of “self-driving,” “automated,” or “unmanned” in regulatory discourse.

While ADSs offer many benefits to road safety, economic growth, and transportation modernization,Footnote 16 myriad regulatory issues – such as safety, testing and certification, liability and insurance, cybersecurity, data flow, ethics, connectivity, infrastructure, and service – must be appropriately addressed.Footnote 17 First, reducing human errors does not mean that ADSs are free from machine error, especially when the technology continues to grow in complexity.Footnote 18 A review of recent incidents involving Tesla and Volvo-Uber systems suggests that ADSs may be subject to different standards of care, considering the many new safety threats and consumer expectations for the technology.Footnote 19 Other commentators also point to cybersecurity and industry risks related to ADSs, given their reliance on data collection, processing, and transmission through vehicle-to-vehicle and vehicle-to-infrastructure communications.Footnote 20 The multifaceted yet under-addressed issues of privacy and personal freedom also call for clearer rules and standards.Footnote 21 Issues including the Internet of Things (IoT), 5G networks, and smart city development – which are beyond the scope of this chapter – also play a crucial role in the regulatory discourse surrounding ADSs.Footnote 22 The different risks posed by ADSs and IoT and their consequential interactions with the physical world may have crucial ramifications for international trade and investment law.Footnote 23

This chapter will not exhaust all of these regulatory issues, but rather focuses on the most controversial, ethical dimension of ADSs. There are concerns about the “crash algorithms” of ADSs, which are the programs that decide how to respond at the time of unavoidable accidents.Footnote 24 Ethical issues stem from the infamous “Trolley Problem,” a classic thought experiment of utilitarianism vis-à-vis deontological ethics introduced in 1967 by Philippa Foot.Footnote 25 It involves a runaway, out-of-control trolley moving toward five people who are tied up and lying on the main track. You are standing next to a lever that can switch the trolley to a side track, on which only one tied-up person is lying. The problem? Would you pull the lever to save five and kill one? What is the right thing to do? In modern times, the advent of ADSs makes the Trolley Problem, once an exercise of applied philosophy, a real-world challenge rather than an ethical thought experiment.Footnote 26 Should ADSs prioritize the lives of the vehicle’s passengers over those of pedestrians? Should ADSs kill the baby, the doctor, the mayor, the jaywalker, or the grandma? Or should ADSs be programmed to reach a decision that is most beneficial to society as a whole, taking into account a massive range of factors? Researchers at the Massachusetts Institute of Technology (MIT) designed scenarios representing ethical dilemmas that call upon people to identify preferences for males, females, the young, the elderly, low-status individuals, high-status individuals, law-abiding individuals, law-breaking individuals, and even fit or obese pedestrians in a fictional, unavoidable car crash.Footnote 27 They collected and consolidated around 40 million responses provided by millions of individuals from 233 jurisdictions and published their results in an article titled “The Moral Machine Experiment.”Footnote 28 How does the world respond to the Trolley Problem? 
While a general, global moral preference can be found, there exist strong and diverse demographic variations specifically associated with “modern institutions” and “deep cultural traits.”Footnote 29 For instance, respondents from China, Japan, Taiwan, South Korea, and other East Asian countries prefer saving the elderly over the young, while those in North America and Europe show the opposite preference.Footnote 30

As ADSs cannot be subjectively assessed ex post for blame or moral responsibility, it seems necessary – yet it is unclear how – to design rules to regulate the reactions of ADSs when faced with moral dilemmas.Footnote 31 Presumably, ethics as well as cultural, demographic, and institutional factors may play a role in likely heterogeneous regulatory measures that could increase frictions in international trade. From a practical, legalist perspective, different tort systems in varying jurisdictions may also have an anchoring effect on ADS designs.Footnote 32 While the decision at the time of an unavoidable accident has immense legal, economic, and moral consequences, it is predetermined when the algorithms are written and built into ADSs. Algorithms are not objective. Rather, they carry the existing biases and discrimination against minority groups in human society, which are reflected and reinforced by the training data used to power the algorithms.Footnote 33 Further, algorithms do not build themselves, so they may carry the values and preferences of the people who write or train them.Footnote 34 Therefore, ADS manufacturers are increasingly exposed to legal and reputational risks associated with these moral challenges.Footnote 35 Governments have not yet addressed these ethical puzzles posed by ADS algorithms.

B Regulatory Initiatives at National and Transnational Levels

One may ask whether there are existing or emerging international standards that can serve as a reference for domestic regulations. What approaches are regulators in different jurisdictions taking to address these issues? This chapter maps out some representative regulatory initiatives that have taken place at both the national and the transnational level and are respectively backed by public, private, and hybrid institutions – without concrete harmonization.Footnote 36

What are the relevant positions of the governments of these countries in the global value chain of automated vehicles? What are their respective regulatory governance strategies in light of concerns related to economic growth, national security, and business competition?Footnote 37 To what extent are these countries competing (or cooperating) with one another to lead the global standard-setting process in various international arenas?Footnote 38 At the national level, crucial questions have largely been left unaddressed. A leader in regulating ADSs, the United States Department of Transportation (US DoT) has been stocktaking and monitoring current ADS standards development activities, including those led by, inter alia, the SAE International, the International Organization for Standardization (ISO), the Institute of Electrical and Electronics Engineers (IEEE), the Federal Highway Administration (FHWA), the American Association of Motor Vehicle Administrators (AAMVA), and the National Highway Traffic Safety Administration (NHTSA), in relation to issues such as cybersecurity framework, data sharing, functional safety, event data recorders, vehicle interaction, encrypted communications, infrastructure signage and traffic, and testing approaches.Footnote 39 While a couple of initiatives might partly touch upon some issues with ethical implications,Footnote 40 nothing concrete has been designated to address ADSs’ ethical issues. 
In the United Kingdom, the British Standards Institution published a pre-standardization document based on relevant guidelines developed by the UK Department for Transport and the Centre for the Protection of National Infrastructure to facilitate further standardization on cybersecurity.Footnote 41 Taiwan has also set up a sandbox scheme for the development and testing of vehicles equipped with ADSs,Footnote 42 and the sandbox is open to a broadly defined scope of experimentation, including automobiles, aircraft, and ships, and even a combination of these forms.Footnote 43

Again, none of these initiatives specifically addresses the ethical issues of ADSs. The world’s firstFootnote 44 concrete government initiative specifically devoted to ADS ethical issues is a report issued by the Ethics Commission for Automated and Connected Driving, a special body appointed by Germany’s Federal Ministry of Transport and Digital Infrastructure.Footnote 45 The report sets out twenty ethical rules for ADSs.Footnote 46 Of importance are the rules asking that “[t]he protection of individuals takes precedence over all other utilitarian considerations,”Footnote 47 that “[t]he personal responsibility of individuals for taking decisions is an expression of a society centred on individual human beings,”Footnote 48 and that “[i]n hazardous situations that prove to be unavoidable, the protection of human life enjoys top priority in a balancing of legally protected interests.”Footnote 49 In particular, Ethical Rule 8 provides that:

Genuine dilemmatic decisions, such as a decision between one human life and another … can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable …. Such legal judgements, made in retrospect and taking special circumstances into account, cannot readily be transformed into abstract/general ex ante appraisals and thus also not into corresponding programming activities.Footnote 50

Ethical Rule 9 further prescribes that “[i]n the event of unavoidable accident situations, any distinction based on personal features,” such as age, gender, and physical or mental conditions, “is strictly prohibited.”Footnote 51 While the ethical rules are not mandatory, they certainly mark the first step toward addressing ADSs’ ethical challenges.Footnote 52 It remains to be seen how these ethical rules will be translated into future legislation and regulation in Germany and beyond.Footnote 53

Other relevant initiatives, while not specifically addressing ADS ethical issues, include algorithmic accountability rules (generally applicable to data protection and AI applications) that may inform future regulations. For instance, the European Union’s General Data Protection Regulation (GDPR) sets out rights and obligations in relation to algorithmic explainability and accountability in automated individual decision-making.Footnote 54 The European Commission also established the High-Level Expert Group on Artificial Intelligence in 2018, which published the final version of its Ethics Guidelines for Trustworthy Artificial Intelligence in April 2019.Footnote 55 At the same time, lawmakers in the United States recently introduced a new bill, the Algorithmic Accountability Act of 2019, which would require companies to audit systems based on machine learning algorithms, to examine instances of potential bias and discrimination therein, and to fix any issues found in a timely manner.Footnote 56

There have been active and dynamic regulatory initiatives at the transnational level.Footnote 57 The United Nations Economic Commission for Europe (UNECE)Footnote 58 and the Contracting Parties to the 1968 Vienna Convention on Road TrafficFootnote 59 have struggled to change the formal rules under their existing frameworks, given the complexity of the issues, high negotiation costs, and institutional inflexibility.Footnote 60 The Vienna Convention was somewhat passive toward the development of driving automation systems until an amendment to its Articles 8 and 39 entered into force in March 2016.Footnote 61 The amendment allows for the transfer of driving tasks from humans to vehicles under certain conditions, lifting the formalistic requirement that a “human” must be in charge of driving tasks.Footnote 62 In September 2018, the UNECE’s Global Forum on Road Traffic Safety (WP.1) adopted a resolution to promote the deployment of vehicles equipped with ADSs in road traffic.Footnote 63 This resolution is rather soft and represents an informal approach to guiding Contracting Parties to the 1968 Vienna Convention on the safe deployment of ADSs in road traffic.Footnote 64 In any case, because major ADS players like the United States, China, and Japan are not contracting parties to the Vienna Convention, what is done under the treaty body may not readily generate direct policy relevance and normative influence at the national level (at least for the moment). A few additional private and hybrid organizations have also been engaging in ADS standard-setting, including the SAE International,Footnote 65 the ISO,Footnote 66 and the IEEE.Footnote 67 Among such standard-setting bodies, the SAE International and the ISO are the most comprehensive and most widely cited and embraced references.
Given the complex and dynamic nature of ADS technologies, the SAE International and the ISO, as informal, private/hybrid bodies with more institutional flexibility, have been able to incorporate their members’ expertise to work together in developing common standards – SAE/ISO standards on road vehicle and intelligent transportation systems.Footnote 68 The SAE International further offers the ISO a Secretariat function and services for ISO’s TC204 Intelligent Transport System work.Footnote 69 With its transnational scope, domain expertise, and industry support, the SAE International’s standards, especially the recent clarification and definition of the J3016 standard’s six levels of driving automation, serve as the “most-cited reference” for the ADS industry and governance.Footnote 70 While there has been progress at the transnational level, these regulatory initiatives have yet to touch upon contentious ethical issues that extend beyond the narrower understanding of road safety of ADS.Footnote 71

III Regulatory Autonomy under the World Trade Organization: Technical Standards, Public Morals, and Trade Secrets

As noted, complex ethical questions, algorithmic designs, and cultural, demographic, and institutional factors may readily be translated into heterogeneous regulatory measures that could increase frictions in international trade and bring about highly contentious issues under the GATT, TBT Agreement, and TRIPS Agreement. These potential frictions raise the questions: How much room in terms of regulatory autonomy will WTO members enjoy in addressing relevant public morals challenges by conditioning the import and sale of fully autonomous vehicles and dictating the design of ADS algorithms to reflect and respect their local values? What are the normative boundaries set by the relevant covered agreements? Bearing this in mind, this chapter uses the ethical dimensions of ADSs as an example to identify three levels of challenges, in terms of the substance, form, and manner of regulation, for WTO members in regulating this evolving technology.

As the MIT research demonstrated, while a general sense of global moral preference may be identified, there are salient diversities in terms of demographic variations, modern institutions, and cultural underpinnings.Footnote 72 It is therefore likely that some regulators in East Asian countries may adopt technical standards that uphold collective public moral and communal values in their efforts to regulate ADSs. Such technical standards may in turn prevent vehicles whose ADS algorithms (which may be trained with data collected from Western societies or written by programmers who do not embrace similar preferences) do not reflect such local ethics and values from entering the market. For instance, if China requires that ADS algorithms built into fully autonomous vehicles must make decisions about unavoidable crashes based on pedestrians’ “social status” or even their “social credit scores,”Footnote 73 and vehicles that do not run on compliant algorithms will not be allowed in the market, what are the legal and policy implications under the GATT and TBT Agreement? To achieve similar regulatory objectives, WTO members may require ADS manufacturers to disclose their algorithm designs (including source code and training data) to verify and ensure conformity to applicable technical standards. In this case, what boundaries are established in the TRIPS Agreement that may prohibit WTO members from forcing disclosure of trade secrets (or other forms of intellectual property)?

A Public Moral Exception, Technical Regulations, and International Standards

First, import bans on vehicles equipped with ADSs imposed because those vehicles are designed and manufactured in a jurisdiction and a manner that reflect a different value set could, even if reasonable, violate the national treatment or most-favored-nation obligations under the GATT. Certainly it would be interesting to see whether vehicles equipped with ADS algorithms that are trained with different data reflecting different cultural and ethical preferences are “like products,”Footnote 74 or whether ADSs with “pet-friendly,” “kids-friendly,” and “elderly-friendly” algorithms are like products. How would diverse consumer morals in a given market influence the determination of likeness? The determination of likeness is “about the nature and extent of a competitive relationship between and among the products at issue,”Footnote 75 and underlying regulatory concerns “may play a role” only if “they are relevant to the examination of certain ‘likeness’ criteria and are reflected in the products’ competitive relationship.”Footnote 76 Given the compliance costs and the distributional role of the global value chain, “even-handed regulation would be found to treat like products less favorably.”Footnote 77 Furthermore, to discipline algorithm designs in terms of how source codes are written and what/how training data are fed, WTO members would need to regulate not only the end product, but also the process and production methods, which remain controversial issues in WTO jurisprudence.Footnote 78 Nevertheless, even if a violation of Article I or III is found, such measures may well be justified under GATT Article XX(a), namely when they are “necessary to protect public morals” and “not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination between countries where the same conditions prevail, or a disguised restriction on international trade” – the so-called two-tier test.Footnote 79 Most other free trade agreements also contain such a standard exception, allowing parties to derogate from their obligations to protect public morals. Similar clauses can also be found in GATS Article XIV(a)Footnote 80 and TBT Agreement Article 2.2. Further examinations include whether the measures are “designed to protect public morals,”Footnote 81 and whether they are “necessary” based on a weighing and balancing process.Footnote 82 Such a process has been the yardstick of the GATT Article XX necessity test, which is, as reaffirmed by the Appellate Body in China – Publications and Audiovisual Products, “a sequential process of weighing and balancing a series of factors,” including assessing the relative importance of the values or interests pursued by the measure at issue, considering other relevant factors, and comparing the measure at issue with possible alternatives in terms of reasonable availability and trade restrictiveness.Footnote 83 Most importantly, the definition and scope of “public morals” can be highly contentious, and WTO adjudicators have embraced a deferential interpretation:

[T]he term “public morals” denotes standards of right and wrong conduct maintained by or on behalf of a community or nation … the content of these concepts for Members can vary in time and space, depending upon a range of factors, including prevailing social, cultural, ethical and religious values … Members, in applying this and other similar societal concepts, should be given some scope to define and apply for themselves the concepts of “public morals” … in their respective territories, according to their own systems and scales of values.Footnote 84

More recently, the Appellate Body in EC–Seal Products also emphasized that WTO members must be given some scope to define and apply the idea of “public morals” pursuant to their own systems and values.Footnote 85 Given this deferential approach, WTO members appear to enjoy ample leeway in defining and applying public moral-based measures according to their own unique social systems and communal values.

Further, because the TBT Agreement applies cumulatively in conjunction with the GATT, an ADS regulatory measure that is justified under GATT Article XX(a) may still violate the TBT Agreement, which similarly contains nondiscrimination obligations but lacks a general public morals exception. According to Trachtman, “the scope of the TBT national treatment requirement has been interpreted somewhat narrowly compared to that of GATT, excluding from violation measures that ‘stem exclusively from a legitimate regulatory distinction,’ in order to avoid invalidating a broader scope of national technical regulations than the GATT.”Footnote 86 Under Articles 2.1 and 2.2 of the TBT Agreement, ADS regulatory measures are required to be sufficiently “calibrated” to different conditions in different areas, and to not be “more trade-restrictive than necessary to fulfill a legitimate objective, taking account of the risks non-fulfillment would create.”Footnote 87 That is, similarly to the GATT jurisprudence, a holistic weighing and balancing process – taking into account the degree of contribution, the level of trade restrictiveness, and the risks of non-fulfillment of the stated objectives, as well as a comparison with possible alternatives – is mandated.Footnote 88 As will be demonstrated next, the necessity of regulatory measures that focus on mandatory disclosure of source codes and training data (both the substance and form of the regulation) may be fiercely challenged; at the same time, locating a reasonably available alternative can be equally problematic.

Given the transnational regulatory initiatives, Article 2.4 of the TBT Agreement also plays a crucial role here. WTO members are required to use the standards developed by the SAE/ISO and UNECE (so long as they are “relevant international standards”) as the bases for domestic regulations unless such standards cannot effectively or appropriately fulfill the legitimate objective of protecting public morals in the ADS issue area.Footnote 89 While this may impose certain (albeit weak) restrictions on the regulatory autonomy and flexibility of WTO members when designing and imposing their ADS algorithm rules and standards in the ethical dimension,Footnote 90 the implausible (if not impossible) global consensus on ethical decision-making means that such international standards remain far out of reach. In the long run, there might be more and more initiatives of international standards in this regard, potentially resulting in concerns over the structure, process, and participation in a standard-setting body as well as political confrontations at the TBT Committee.Footnote 91

B Automated Driving System Algorithms, Source Codes, and Training Data as “Undisclosed Information” under the TRIPS Agreement

Even if the substance of ADS regulatory measures does not violate existing obligations under the GATT and TBT Agreement, WTO members may require ADS manufacturers to disclose their algorithm designs, source code, and training data to verify compliance and achieve their regulatory objectives. If WTO members force ADS vehicle manufacturers or programmers to disclose their trade secrets – proprietary algorithm designs, source codes, and training data – can such measures survive the test of the TRIPS Agreement? To be sure, entities that own ADS algorithms can seek protection via various channels including patents, copyrights, and trade secrets.Footnote 92 However, the commercial practice in the ADS field (and many other AI applications) has been to hold both source code and training data as trade secrets to maximize the protection of interests and to remain competitive in the market.Footnote 93

Article 39.1 of the TRIPS Agreement requires members, when “ensuring effective protection against unfair competition as provided in Article 10bis of the Paris Convention (1967),” to “protect undisclosed information in accordance with paragraph 2.”Footnote 94 Article 39.2 further provides that information – when it is secret (not “generally known among or readily accessible to persons within the circles that normally deal with the kind of information in question”), has commercial value because of its secrecy, and has been subject to reasonable steps by the person lawfully in control of it to keep it secret – shall be protected “from being disclosed to, acquired by, or used by others without their consent in a manner contrary to honest commercial practices.”Footnote 95 This requires WTO members to provide minimum protections for undisclosed information, recognized in Article 1.2 as a category of intellectual property,Footnote 96 in accordance with the conditions and criteria provided in Article 39.2.Footnote 97

Article 39 does not explicitly prohibit members from promulgating laws, consistent with other provisions of the TRIPS Agreement, that allow lawful disclosure or create exceptions under which trade secrets may lawfully be compelled. Yet what constitutes a lawful disclosure under the TRIPS Agreement can be controversial. Can members promulgate any law that requires disclosure of trade secrets to serve certain regulatory objectives? Are all measures regulating ADSs and requiring disclosure of source code and training data for conformity assessment consistent with the TRIPS Agreement? There has been no case law on Article 39, but the rejection during the negotiations of the United States’ proposal to include “theft, bribery, [and] espionage” of secrets within “a manner contrary to honest commercial practices”Footnote 98 indicates how contentious the boundaries of lawful disclosure can prove.Footnote 99 A contextual reading of TRIPS Agreement Articles 7 and 8 suggests that “Members may … adopt measures necessary to … promote the public interest in sectors of vital importance to their socio-economic and technological development,”Footnote 100 and that “a balance of rights and obligations”Footnote 101 is called for, but such measures cannot “unreasonably restrain trade.”Footnote 102 The scope of disclosure, the regulated entities, the manner of disclosure, and enforcement and safeguards may therefore be crucial factors in determining consistency. In this sense, in China’s social credit scenario, a limited approach that requires essential source code and training data (from the companies that program the ethical decision-making algorithms, rather than from all actors along the global ADS supply chain) to be disclosed to an expert committee (or a similar institutional design)Footnote 103 for review and certification, rather than a wholesale, systematic forced disclosure, may be more TRIPS-consistent.
Additional safeguards that prohibit government agencies from sharing disclosed proprietary information with others may also help to avoid inappropriate forced technology transfers, unfair competition, and unfair commercial use.Footnote 104 Relatedly, some recent megaregional free trade agreements (mega-FTAs) have included provisions that explicitly prevent governments from demanding access to an enterprise’s proprietary software source code.Footnote 105 Demands for stronger protection of source code and training data and limitations on governments’ regulatory room for maneuver are likely to grow in the age of AI.

C “Algorithmic Black Box” and the Limits of Regulatory Measures

An additional layer of regulatory challenge, one that may undermine the effectiveness (and therefore the necessity) of these measures, stems from the technological nature of machine/deep learning algorithms – their opacity, or what a leading commentator has criticized as the “black box” problem.Footnote 106 This problem refers to the complexity and secrecy of algorithm-based (especially deep learning-based) decision-making processes, which frustrate meaningful scrutiny and regulation. Without understanding and addressing the black box challenge, it may be unrealistic to rely on disclosure of source codes as a regulatory approach. The black box problem can further be disentangled into a “legal black box” and a “technical black box.”Footnote 107 The “legal black box” is opaque because complex statistical models or source codes are proprietary and legally protected by trade secret laws.Footnote 108 Regulatory measures focusing on forced disclosure are one way to fix this black box problem by unpacking the algorithms to secure a certain level of compliance.

However, the “technical black box”, which arises in applications based on machine/deep learning algorithms, is much more problematic.Footnote 109 A technically inherent lack of transparency persists, as decisions and classifications emerge automatically in ways such that no one – not even the programmers themselves – can adequately explain, in human-intelligible terms, why and how certain decisions and classifications are reached.Footnote 110 Owing to the highly nonlinear character of such systems, there exists “no well-defined method to easily interpret the relative strength of each input and to each output in the network.”Footnote 111 Therefore, measures limited to legally forced disclosure can hardly address the technical black box problem. Even if the regulator forces ADS manufacturers to disclose source codes and algorithm designs, the level of compliance may not be effectively ascertained and evaluated. Because of this technical black box problem, regulatory measures designed to compel disclosure of source codes and ensure compliance with ethical rules on ADSs (and hence the rational nexus between regulatory means and objectives) may be significantly frustrated.
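To make the technical black box concrete, consider a deliberately tiny illustration in Python (not drawn from any actual ADS; the weights and the ‘brake’/‘swerve’ labels are invented for this sketch). Even when every parameter of this nonlinear model is fully “disclosed”, no individual weight supplies a human-intelligible reason for a given decision:

```python
# Illustrative sketch only: a fully "disclosed" toy network whose
# parameters, though visible, do not explain its decisions. All
# weights and labels below are invented for illustration.
import math

# Every parameter of a tiny 2-input, 3-hidden-unit, 1-output network.
W1 = [[0.9, -1.3], [-0.7, 0.8], [1.1, 0.4]]   # hidden-layer weights
b1 = [0.1, -0.2, 0.05]                         # hidden-layer biases
W2 = [1.5, -2.0, 0.6]                          # output-layer weights
b2 = -0.1                                      # output bias

def decide(x1, x2):
    """Map two (hypothetical) sensor inputs to a driving decision."""
    hidden = [math.tanh(W1[i][0] * x1 + W1[i][1] * x2 + b1[i])
              for i in range(3)]
    score = sum(W2[i] * hidden[i] for i in range(3)) + b2
    return "brake" if score > 0 else "swerve"

# Disclosure gives the regulator every number, yet no single weight
# states a rule; nearby inputs can still produce opposite decisions.
print(decide(0.5, 0.5), decide(-0.5, 0.5))  # brake swerve
```

A production ADS model contains millions of such parameters rather than thirteen, which is why disclosure of source codes and weights alone does little to render its decisions explainable.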

IV Conclusion

ADSs promise to transform modern transportation, the conventional division of labor, social interactions, and the provision of services. However, as vehicles equipped with different levels of ADSs enter the market, a range of regulatory issues must be addressed. In particular, the ethical puzzles pose formidable and multifaceted challenges for governments acting individually and collectively to deliver good ADS governance. As this chapter has shown, complex ethical questions, algorithmic designs, and cultural, demographic, and institutional factors may readily translate into heterogeneous regulatory measures that could increase frictions in international trade and raise highly contentious issues in the WTO. This chapter used ADS ethics as a vantage point to identify and unpack three levels of challenges WTO members may face in addressing public moral issues by conditioning the import and sale of ADSs and dictating their design to reflect and respect local values. These challenges may well translate into a regulatory dilemma for WTO members. Premised upon a review of regulatory initiatives at the national and transnational levels, this chapter not only identified the normative boundaries set by the relevant WTO-covered agreements but also highlighted the inherent limitations of potential regulatory measures due to the technological nature of AI,Footnote 112 which calls for a reconceptualization of the forms and substance of regulations on such evolving technology.

13 International Trade Law and Data Ethics: Possibilities and Challenges

Neha Mishra
I Introduction

The global economy is constantly being reshaped by the rapid growth of data-driven services and technologies. The complementary relationship of big data analytics and artificial intelligence (AI)Footnote 1 holds the potential to generate significant economic and social benefits.Footnote 2 However, such data-driven services can also be misused by companies, governments and cyber-criminals in different ways, resulting in increased privacy and security breaches; disinformation campaigns; and biased algorithmic decision-making that disempowers users of such technologies/services.Footnote 3 These misuses often result from deficiencies or loopholes in how data-driven services collect, process, transfer and share data, as well as from the technical design of their algorithms or computer programs, thereby raising strong concerns regarding the ethics of data management and data-driven technologies. In response to these concerns, several governments and private initiatives have formulated data ethics frameworks that regulate data-driven technologies.Footnote 4 Similarly, scholars have started evaluating how data ethics principles can act as a ‘moral compass’ in determining ‘good’ digital regulation and governance.Footnote 5 Some governments have translated these ethical frameworks applicable to data-driven services and technologies into binding laws and regulations (or ‘data ethics-related measures’).

In some cases, data ethics-related measures can have a trade-restrictive impact. For instance, in order to protect personal privacy, governments could restrict the cross-border transfer and processing of personal data, which can be burdensome and inefficient, especially for foreign companies. Governments may also demand mandatory access to vital technical information of companies, such as the source code and algorithms of their data-driven technologies, so as to ensure they are robust, fair and non-discriminatory. Further, as platforms increasingly use automated processes to moderate online content,Footnote 6 governments might desire to scrutinise these algorithms to ensure compliance with domestic censorship laws. Such measures may be more burdensome for foreign companies, especially if they prejudice the safety and integrity of their proprietary technologies. Governments may also prescribe specific domestic standards for data-driven services, which may or may not be compatible with global standards.Footnote 7 Such measures can interfere with the cross-border supply of digital services and technologies and thus act as trade barriers.Footnote 8 However, to date, neither scholars nor policy experts have closely examined the interface of international trade law and data ethics. For instance, the World Trade Report 2018 of the World Trade Organization (WTO), which focused on AI, mentioned the word ‘ethics’ only once.Footnote 9

Given these gaps in the existing literature, this chapter addresses whether international trade agreements, such as the WTO’s General Agreement on Trade in Services (GATS), provide sufficient policy space to governments to implement data ethics-related measures, despite their possible trade-restrictive effect. More specifically, this chapter explores the role of general exceptions in GATS (art. XIV) in delineating WTO members’Footnote 10 policy space to implement data ethics-related measures. Section II discusses the key principles of data ethics common to various policy frameworks, including the protection of human rights; algorithmic accountability; and ethical design. Further, this section highlights examples of government measures intended to implement these data ethics principles, and if and when such measures have a trade-restrictive impact.

Section III examines the interface of international trade law and data ethics in light of the general exceptions in GATS art. XIV. This section argues that GATS art. XIV contains relevant defences for data ethics-related measures. For instance, members may argue that their measures are necessary to achieve compliance with domestic laws, including privacy laws (GATS art. XIV(c)(ii)), or to protect public morals or maintain public order (GATS art. XIV(a)). An evolutionary interpretation of GATS art. XIV can cover several data ethics concerns. However, regulatory diversity across countries and the evolving nature of data ethics frameworks set out a difficult test for assessing the limits of GATS art. XIV, especially in examining the core rationale underlying data ethics-related measures and in identifying the least burdensome and trade-restrictive means to realise the policy goals enshrined in data ethics frameworks.

Ultimately, applying international trade agreements to data ethics-related measures offers both possibilities and challenges. For instance, WTO panelsFootnote 11 can meaningfully apply GATS art. XIV to accommodate data ethics principles within the WTO framework, including by referring to relevant private/transnational technical standards on data-driven services and international/multi-stakeholder norms on data ethics and governance. Similarly, using both technological and legal evidence, panels can apply the necessity test in GATS art. XIV to curtail protectionist measures that governments have disguised as being necessary for implementing data ethics principles. However, panels also face the challenge of balancing dynamic domestic and transnational interests related to ethical data governance. In order to better engage with these possibilities and challenges, this chapter recommends that the WTO should open itself to policy developments in data governance as well as remain abreast of technological advances, especially in the designing and verification of digital technologies and services.

II Implementing Data Ethics Principles and Their Trade Repercussions

Across the world, governments are developing frameworks and high-level principles on data ethics, particularly for AI-driven sectors.Footnote 12 Subsection A of this section discusses certain key principles common to these frameworks such as protection of human rights, including individual privacy; algorithmic accountability; and ethical design. It also provides examples of measures that governments impose when intending to realise these principles. Subsection B then highlights the potential trade-restrictive impact of certain data ethics-related measures.

A Key Principles of Data Ethics

The fundamental component of all data ethics frameworks is the protection of human rights.Footnote 13 Several international and regional instruments highlight the importance of a human rights-centric approach in data governance.Footnote 14 Similarly, individual governments specifically recognise the importance of protecting human rights in the use of data-driven technologies.Footnote 15 The essence of a human rights-centric approach involves increasing individual control over personal data, and ensuring that all data is used, processed and shared in a manner compliant with fundamental human rights.

In this regard, the human rights-centric approach entails protecting individuals against discrimination, promoting digital access and inclusion, and safeguarding individual privacy.Footnote 16 From the perspective of data ethics, privacy is essential at all stages of data management, from ensuring informed consent of individuals in the collection of their personal data to increasing human control over all aspects of data processing, including the choice not to be subject to profiling and automated decision-making. The emergence of big data analytics also raises concerns around group privacy (although it remains debatable if this falls within the scope of personal privacy).Footnote 17 Unsurprisingly, various domestic laws and regulations now deal with privacy concerns, including data protection laws.Footnote 18

Data-driven technologies can be used to breach human rights other than the right to privacy in various ways. For example, AI algorithms using training data with sensitive variables such as gender and race often generate biased outcomes or decisions that adversely affect the fundamental rights of minority groups.Footnote 19 Big data analytics can be used to identify and then persecute political minorities or dissidents.Footnote 20 Further, governments increasingly use automated algorithms to filter content online, potentially harming the right to freedom of expression and access to information.Footnote 21

A human rights-centric approach in data governance has implications for both governments and the private sector. For instance, governments are required to respect, protect and fulfil human rightsFootnote 22 by ensuring fair and non-discriminatory use of data-driven technologies for public functions; protecting individuals from potential harms and misuses of data-driven technologies by private sector entities, including enforcement of regulations requiring transparent and non-discriminatory data practices by private entities; and ensuring that private companies provide appropriate remedies to affected individuals. Governments may also require businesses to change specific practices in data management and processing to ensure compliance with a human rights-centric approach in data governance. However, the structural mechanisms by which governments hold the private sector accountable for complying with human rights norms may vary across countries. This difference is attributable to varying perceptions among countries regarding how human rights should be formulated and enforced domestically.

A human rights-centric approach in the governance of data-driven technologies necessitates algorithmic accountability. This means that companies should be held responsible for how their algorithms function, including the decisions taken using them. For instance, in AI-driven technologies, huge datasets (known as training data) are used for predictive analytics and generating decisions in various areas including healthcare, credit reporting, law enforcement, retail and marketing. Several experts argue that increasing algorithmic accountability requires data-driven technologies to be transparent and explainable (i.e. the computer programmers must be able to explain how their algorithms/designs use and process data to generate certain results).Footnote 23 This can facilitate rectifying algorithms that generate unfair or discriminatory outcomes.Footnote 24 Algorithms can be explained at a systemic level (i.e. the logic of an algorithm) or at an individual level (i.e. how the algorithm decides in a specific case),Footnote 25 although this distinction remains debatable.Footnote 26

Significant debate exists regarding the extent to which algorithms are or can be explainable and what regulatory mechanisms are needed to achieve the same. Certain experts argue that the transparency of source code/algorithms allows understanding the decision-making rule of the algorithms, but not their functionality in every random set of circumstances.Footnote 27 Therefore, they suggest that alternative technological mechanisms must be explored to achieve stronger algorithmic accountability such as verification programs that ex ante check if algorithms meet certain specifications (e.g. if they comply with the rule of law), and holding designers/technology companies accountable if and when a program fails to meet those specifications.Footnote 28 Others argue that explainability can be achieved through transparency and adequate regulatory inspection of algorithms.Footnote 29 On a different note, some experts emphasise that policymakers must be concerned about how data scientists build their datasets and the possible deficiencies in that process rather than solely concentrating on algorithmic accountability.Footnote 30

While it is outside the scope of this chapter to explore these arguments in detail, the diversity of perspectives on algorithmic accountability, including transparency, leads to differing regulatory approaches across countries. This is important because governments are increasingly advocating that transparency and explainability of algorithms is a means to achieving accountability in data-driven technologies.Footnote 31 However, certain governments also acknowledge the limitations of transparency and explainability mechanisms in ensuring algorithmic accountability.Footnote 32 Separately, governments may be concerned about the potential trade-offs between transparency and accuracy of algorithms.

The General Data Protection Regulation (GDPR) of the European Union (EU) arguably incorporates important elements of data ethics.Footnote 33 GDPR arts 44 and 45 limit data transfers outside the EU to ensure that all personal data of EU residents is processed according to the highest data protection standards. GDPR art. 12 imposes an obligation on data controllers to provide concise, transparent, easily understandable and accessible information to individuals regarding how they use personal data, including the extent to which they may use or rely upon personal data for automated decision-making.Footnote 34 GDPR art. 22 provides an individual the right not to be subjected to a decision based solely on automated decision-making or profiling,Footnote 35 if such a decision has ‘legal effects’ or ‘significantly affects’ the concerned individual. However, significant debate exists regarding whether GDPR art. 22 incorporates a right to explainability of algorithms, for instance, those used in AI technologies.Footnote 36

More recently, other domestic laws have started focusing on data ethics. For instance, the Digital Republic Act in France requires that all algorithmic decision-making by governments should be fully explainable.Footnote 37 In the USA, certain senators have proposed an Algorithmic Accountability Act, requiring companies to scrutinise their algorithms for potential risks and biases, thereby enabling greater algorithmic accountability.Footnote 38 Finally, certain regional trade agreements include provisions requiring the parties to adopt basic frameworks on data protection.Footnote 39 The recently concluded Digital Economy Partnership Agreement between New Zealand, Singapore and Chile includes a specific provision requiring the parties to endeavour to adopt ethical AI governance frameworks, although it only vaguely refers to ‘internationally recognised principles or guidelines’.Footnote 40

Another key element in data ethics is ethical design, which is an extension of a human rights-centric approach in data governance. In practice, ethical design requires that all suppliers of data-driven technologies devise and implement technical designs and standards compliant with human rights. For example, privacy-by-design and security-by-design measures require digital service suppliers to use digital technologies and implement corporate policies that, by default, ensure data privacy and security. This can be instrumental in protecting personal data and increasing trust in data-driven technologies. Further, as ethical design focuses on technologically robust solutions, it promotes more reliable and sustainable outcomes in comparison to prescriptive data localisation measures or mandatory use of indigenous technical standards. GDPR art. 25 requires all digital service suppliers in the EU to adopt EU data protection principles by design and by default.
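The ‘by design and by default’ idea in GDPR art. 25 can be illustrated with a minimal sketch (the setting names below are invented and not drawn from any actual service): every data-sharing option starts at its most protective value, and sharing occurs only after an explicit, recorded opt-in.

```python
# Illustrative "privacy by default" sketch: all sharing options start
# disabled; enabling any of them requires an explicit consent event.
# Setting names are hypothetical, invented for this example.
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    share_location: bool = False    # off unless the user opts in
    personalised_ads: bool = False  # off unless the user opts in
    public_profile: bool = False    # private unless the user opts in
    opt_ins: list = field(default_factory=list)  # audit trail of consents

    def opt_in(self, setting):
        setattr(self, setting, True)
        self.opt_ins.append(setting)  # record the consent event

settings = PrivacySettings()         # a new user shares nothing by default
settings.opt_in("personalised_ads")  # sharing happens only after consent
print(settings.personalised_ads, settings.share_location)  # True False
```

The design choice is that protection is the default state of the system rather than an option the user must discover and enable, which is the core of the privacy-by-design measures described above.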

In practice, however, implementing ethical design is difficult. This challenge arises as the appropriate standards and benchmarks in the digital sector remain controversial, both in terms of regulatory practices and industry practices. For instance, with respect to privacy, considerable debate exists regarding whether the GDPR should be considered a global standard.Footnote 41 Similarly, technical standards developed by leading digital powers such as the USA and China are often market competitors, especially for AI-driven services.Footnote 42 Further, while laws and regulations tend to be ambiguous in their meaning (e.g. what is personally identifiable information in a privacy law), engineering models are highly dependent on precision of definitions in designing robust and reliable technologies.Footnote 43

B Trade Implications of Data Ethics-Related Measures

As discussed in Section I, certain data ethics-related measures may be trade-restrictive as they hinder the cross-border supply of digital services, thereby potentially breaching members’ obligations in WTO agreements. Some examples include: (i) restrictions on data processing or transfers; (ii) prescribing specific technical standards for digital services and products; and (iii) requiring digital technology providers to submit their algorithms, source code and other vital technical information for government scrutiny/audit.

Governments may impose restrictions on cross-border data flows/processing, or even require local storage and processing of data in sensitive sectors, to safeguard individual privacy rights. Some data protection laws even restrict the use of personal data for profiling. In other cases, regulatory approvals may be required to process sensitive data outside the borders of a country. These measures typically increase costs, especially for foreign companies lacking local data storage or processing capabilities.Footnote 44 When the regulatory requirements for trans-border data transfers/processing are administered in an unfair or unreasonable manner, they may be inconsistent with the domestic regulation provisions in GATS art. VI. Further, data processing restrictions may affect the development and accuracy of AI technologies because they prevent data accumulation on a global scale, especially affecting foreign, multinational suppliers. Such measures may be considered discriminatory against foreign services or service suppliers, potentially breaching the national treatment obligation in GATS art. XVII. Under the GDPR, digital service suppliers in the EU face several restrictions in transferring and processing personal data of EU residents abroad (except for a select group of countries that the EU identifies as having an adequate framework of data protection).Footnote 45 This restriction on the transfer of personal data to non-EU countries may be inconsistent with the most-favoured-nation obligation in GATS art. II.

As data-driven services have become common, several governments have started prescribing domestic technical standards, especially in AI-related sectors. These technical standards may be imposed for a variety of reasons, including ensuring that digital technologies are robust and secure, thereby reducing the chances of misuse of data. In the future, governments may prescribe standards that they consider compliant with ethical design requirements. However, if such prescribed standards are incompatible with competitive global standards or extremely onerous to implement, they create barriers for foreign services and service suppliers. In such scenarios, domestic technical standards may violate disciplines on domestic regulation under GATS art. VI.

Requirements imposed on digital technology providers to submit their algorithms and source code for government scrutiny/audit could have an underlying data ethics rationale, but such measures could also be trade-restrictive.Footnote 46 For instance, such measures may restrict the entry of foreign competitors into domestic markets, thereby breaching the national treatment obligation contained in GATS art. XVII. Additionally, such measures can prejudice the security/reputation of the global data operations of digital suppliers, thereby violating obligations on domestic regulation in GATS art. VI. For instance, governments can implement such measures unreasonably or unfairly to deliberately harm the commercial interests of foreign players, including by sharing their vital technical information with domestic competitors.Footnote 47

Additionally, in rare scenarios, countries may implement extreme measures banning a certain kind of data-driven technology to prevent abuse of human rights. For example, given the potential dangers and abuses of facial recognition technology, a government could potentially ban commercial software facilitating facial recognition, especially from foreign companies. Such measures may be in conflict with obligations on market access and non-discrimination under GATS.

III Defending Data Ethics-Related Measures under GATS General Exception

Although data ethics-related measures can violate obligations contained in WTO treaties, governments can argue that they protect vital public interests, including protecting privacy and addressing other ethical concerns regarding the processing and sharing of data, under the general exceptions contained in GATS art. XIV. While a significant amount of scholarship has discussed the justification of privacy laws under GATS art. XIV(c)(ii),Footnote 48 the role of GATS art. XIV(a) (the public morals/public order exception) in facilitating other public interests related to data ethics such as protecting against discrimination, facilitating technical robustness and security of technologies, and ensuring appropriate online content moderation remains unexplored. Therefore, after highlighting the relevance of GATS art. XIV(c)(ii) and GATS art. XIV(a) in justifying data ethics-related measures in subsection A of this section, subsection B focuses on how GATS art. XIV(a) applies to data ethics-related measures. Finally, subsection C discusses the various possibilities and challenges involved in accommodating data ethics-related measures within the WTO/GATS framework.

This section argues that GATS art. XIV can play a role in preserving the policy space necessary for members to impose data ethics-related measures. For instance, under GATS art. XIV(c)(ii), members may argue that certain data ethics-related measures are necessary to achieve compliance with domestic laws, especially data protection/privacy laws. Similarly, under the public morals/public order exception in GATS art. XIV(a), panels have generally interpreted ‘public morals’ broadly in line with domestic values/culture; thus, data ethics-related measures can generally qualify under GATS art. XIV(a). However, to ensure a holistic assessment under GATS art. XIV, panels must adopt a cautious, well-reasoned and coherent standard of review in evaluating the necessity of data ethics-related measures under GATS art. XIV. This would entail panels considering both the possibility of accommodating data ethics principles within the GATS framework (e.g. through a meaningful interpretation and application of the exception) and the challenge of balancing (often conflicting) domestic and international perspectives on data governance (e.g. in conducting a holistic weighing and balancing test on the various regulatory means adopted to achieve a data ethics-related policy objective). The ability of the WTO to remain open to relevant policy and technological developments related to data-driven technologies (including relevant multi-stakeholder/transnational norms and standards) will be crucial in ensuring that the GATS framework can support genuine and legitimate data ethics-related measures.

A Applying General Exceptions to Justify Data Ethics-Related Measures
1 Relevance of GATS Art. XIV(c)(ii)

GATS art. XIV(c)(ii) is likely to be relevant in justifying data ethics-related measures aimed at protecting individual privacy. Under GATS art. XIV(c)(ii), a measure violating GATS obligations can be justified if: (a) it is implemented to secure compliance with domestic ‘laws and regulations’,Footnote 49 including those relating to ‘the protection of the privacy of individuals in relation to the processing and dissemination of personal data and the protection of confidentiality of individual records and accounts’; (b) those ‘laws and regulations’ are themselves consistent with WTO law; and (c) the measure is necessary to secure compliance with these laws and regulations.Footnote 50

GATS art. XIV(c)(ii) can be interpreted in an evolutionary manner to cover privacy concerns.Footnote 51 For instance, ‘protection of privacy of individuals’ in GATS art. XIV(c)(ii) could potentially cover measures preventing unauthorised online surveillance of individuals or the indiscriminate use of personal data by companies without informed user consent. Similarly, data processing outside one’s borders may be restricted to prohibit illegal third-party use of personal data. Under GATS art. XIV(c)(ii), members must also demonstrate that the domestic law with which the measure seeks to achieve compliance is consistent with WTO law. While privacy laws are not per se inconsistent with WTO law, certain elements, such as discriminatory or ambiguous conditions for cross-border data transfers, may violate WTO law.Footnote 52 Group privacy concerns arguably do not fall under this exception because deidentified/anonymised data is not generally considered ‘personal data’, even though such data can be used to discriminate against specific groups of individuals. These concerns are more likely to be addressed under GATS art. XIV(a), as discussed next.

2 Relevance of GATS Art. XIV(a)

When data ethics-related measures do not specifically relate to personal privacy or to achieving compliance with other domestic laws, they are more likely to be justified under GATS art. XIV(a), which allows measures ‘necessary to protect public morals or to maintain public order’. The public order exception may be invoked only where a genuine and sufficiently serious threat is posed to one of the fundamental interests of society. Further, members may rely on GATS art. XIV(a) in addition to GATS art. XIV(c)(ii) in justifying their data ethics-related measures.

The terms ‘public morals’ and ‘public order’ are distinct. However, panels have generally taken the view that ‘to the extent that both concepts seek to protect largely similar values, some overlap may exist’.Footnote 53 ‘Public order’ is defined as ‘a genuine and sufficiently serious threat’ to ‘one of the fundamental interests of society’.Footnote 54 ‘Public morals’, by contrast, is undefined; panels could therefore interpret it with reference to international norms, the domestic values/culture of the country concerned, or both. Although this tension between international/universal values and domestic values remains debatable,Footnote 55 WTO tribunals have generally shown an inclination to consider local values in determining the meaning of ‘public morals’. In fact, in the US – Gambling dispute, the panel held that ‘public morals’ in GATS art. XIV(a) ‘denotes standards of right and wrong conduct maintained by or on behalf of a community or nation’, and such standards ‘can vary in time and space, depending upon a range of factors, including prevailing social, cultural, ethical and religious values’.Footnote 56

The WTO tribunals have generally applied GATS art. XIV(a) in a broad, flexible and evolutionary manner.Footnote 57 For instance, in China – Publications and Audiovisual Products, the Appellate Body (AB) held that censorship of printed and digital content fell within the scope of ‘public morals’ in GATS art. XIV(a).Footnote 58 In US – Gambling, ‘public morals’ was interpreted to cover public morals and public order concerns related to online gambling (including money laundering).Footnote 59 In EC – Seals, the AB held that the term ‘public morals’ covered animal welfare concerns.Footnote 60 In Brazil – Taxation, the panel held that a measure imposed to bridge the digital divide and promote social inclusion in Brazil fell within the scope of ‘public morals’.Footnote 61 In Colombia – Textiles, the panel held that a domestic tariff intended to combat money laundering in Colombia fell within the scope of ‘public morals’.Footnote 62

Governments have significant freedom in deciding how to define and achieve public morality and public order. In EC – Seals, the panel identified two steps in assessing measures under the public morals exception: first, whether the stated policy concern actually exists in the society and, second, whether it falls within the scope of ‘public morals’.Footnote 63 However, in the same dispute, the AB held that it is not necessary for the tribunal to identify the existence of a specific risk to public moralsFootnote 64 or the exact content of the public morals at issue (thus implying that the content of public morals varies with each member’s values).Footnote 65 Further, members have the right to set different levels of protection to address identical moral concerns.Footnote 66 Arguably, a similar standard of review may apply when members impose measures necessary for maintaining public order. Although the requirement of a genuine and sufficiently serious threat is a high threshold, members are likely to have sufficient discretion in determining the fundamental interests of their society. For instance, a member desiring to control the domestic internet activities of its residents could argue that restricting data transfers/processing is required for maintaining ‘public order’.

Given the flexible interpretation of GATS art. XIV(a), data ethics-related measures are likely to fall within the scope of this provision. First, governments could argue that algorithmic accountability and ethical design are important elements of domestic public policy, such as preserving social order and protecting consumers from harm. Second, the adoption of a human rights-centric approach can be a defining element of a society’s public morals and constitute a fundamental public interest. For example, in order to protect minority groups from algorithmic discrimination, a government must be able to scrutinise the relevant algorithms/source code; measures to that end could thus qualify under ‘public morals’ and ‘public order’. Third, privacy is considered to be a ‘moral’ issue in many societies because of its connection with socio-cultural and religious values.Footnote 67 For example, sexual preferences and religious affiliation are considered highly intimate information in many societies. Finally, certain governments may argue that their data ethics-related measures are connected to human rights recognised in international instruments and declarations of the international policy community on data governance.Footnote 68 While panels are unlikely to accept the public morals or public order exception as a basis for enforcing international human rights,Footnote 69 they are likely to attempt to interpret GATS art. XIV(a) in a manner that respects human rights and international public policy.

B Applying the Public Morals/Public Order Exception to Data Ethics-Related Measures

If a data ethics-related measure qualifies under GATS art. XIV(a) or GATS art. XIV(c)(ii), the panel must examine its necessity to achieve the underlying policy objective under a ‘weighing and balancing test’. This subsection focuses on the necessity of data ethics-related measures to protect public morals or maintain public order in accordance with the ‘weighing and balancing test’.

The first step in this test is assessing the contribution of the measure to the policy objective under GATS art. XIV – that is, the nexus between the measure and the policy objective under GATS art. XIVFootnote 70 – for instance, by looking at the design, content, structure and expected operation of the data ethics-related measure.Footnote 71 For example, if a member requires companies to provide their source code or algorithms to verify them for bias (e.g. discriminating against minorities) or other privacy loopholes (particularly, group privacy concerns), the panel will examine if this requirement contributes to protecting public morals or maintaining public order under GATS art. XIV(a). From a technological perspective, this assessment can be difficult as the efficacy of transparency/disclosure of algorithms and source code to understand the underlying logic and discriminatory outcomes in algorithmic decision-making remains debatable.Footnote 72 For complex AI, such disclosure requirements can also be counterproductive; for example, in autonomous vehicles, requiring access to the algorithms could compromise the security of the digital technologies. As explainability of algorithms improves with technological developments (especially the development of explainable AI or XAI), panels can make better assessments by seeking additional expert technical evidence on relevant issues.

Similarly, questions may arise regarding whether restrictions on cross-border data flows contribute to achieving the key principles of data ethics. Several studies indicate that severe restrictions on data flows are generally ineffective in enhancing the privacy or security of data-driven technologies.Footnote 73 Similarly, locating data within one’s borders does not automatically increase control over or access to data. To the contrary, such measures increase the possibility of unauthorised surveillance and violations of human rights, and interfere with the development of a healthy and competitive domestic digital market, especially when a few companies (potentially state-controlled) own all the domestic data centres. However, easy access to local data servers may facilitate regulatory enforcement (e.g. pursuing action against companies that fail to comply with data ethics-related measures).

To facilitate a higher standard of data ethics, members may impose domestic regulations requiring technology companies to comply with internationally recognised technical standards, adopt designs that protect privacy and security by default, and/or use certification mechanisms to verify compliance with these ethical design requirements.Footnote 74 In comparison to blanket cross-border data transfer restrictions, such requirements appear more effective in facilitating digital inclusion, preventing disinformation campaigns and ensuring technologically robust solutions. Therefore, such measures are more likely to contribute to protecting public morals and maintaining public order.

The next step under the weighing and balancing test is assessing the trade-restrictiveness of the data ethics-related measure; that is, the restrictive impact of the measure on international commerce.Footnote 75 This step involves an assessment not only of the sector affected directly by the measure but also other sectors. For example, as data-driven services are used across several industries, restrictions on cloud computing services (e.g. mandatory compliance with domestic technical standards or data/security certifications) can potentially impact several sectors.Footnote 76

Finally, in applying the weighing and balancing test, panels will take into account any alternative, less trade-restrictive measures proposed by the complainant. The key factors examined are whether such alternatives are reasonably available to the responding member and feasible to implement.Footnote 77 Further, any proposed alternative must achieve an equivalent level of protection of the stated policy objective as the imposed measure.Footnote 78 With regard to regulating certain aspects of the digital sector, self-regulatory (or market-driven) approaches may be more effective and efficient than highly prescriptive laws and regulations.Footnote 79 For example, rather than imposing specific technical standards, governments could rely on competitive standards developed by the industry in sectors such as AI, which are more likely to be transparent and secure. Similarly, instead of restricting data-driven technologies through unreasonable regulations on data processing, countries could recognise market-driven verification mechanisms that certify compliance with robust standards on ethical design.

Despite the growing popularity of these market-driven mechanisms, panels are likely to consider them as, at best, complementary measures rather than alternatives to prescriptive laws and regulations.Footnote 80 This is because countries may be concerned about the robustness or representativeness of private/multi-stakeholder standards, especially when they are developed without sufficient government oversight.Footnote 81 This would be the case even if the private/multi-stakeholder standards are robust and generally considered industry best practice. Further, verification/certification mechanisms could be very difficult and expensive for developing countries to adopt and monitor, and thus not feasible. Therefore, at least in the current scenario, most market-driven or self-regulatory alternatives to data ethics-related measures are likely to fail to satisfy the threshold in GATS art. XIV. The same argument could also be made for technological mechanisms to ensure greater algorithmic accountability (as discussed in subsection A, Section II). In such cases, panels are likely to find more prescriptive measures, such as mandatory disclosure of source code/algorithms, compliant with GATS art. XIV.

If a trade-restrictive measure provisionally satisfies the necessity test under GATS art. XIV(a), it must further be consistent with the chapeau:

Subject to the requirement that such measures are not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination between countries where like conditions prevail, or a disguised restriction on international trade in services, nothing in this Agreement shall be construed to prevent the adoption or enforcement by any Member of measures.

The chapeau prevents members from abusing the exceptions contained in the subsections of GATS art. XIV and ensures that members implement all measures in good faith.Footnote 82 It requires an enquiry into the ‘design, architecture, and revealing structure of a measure’Footnote 83 to assess if the measure violates the GATS art. XIV chapeau in ‘its actual or expected application’.Footnote 84 For example, if a measure deliberately prohibits foreign service suppliers from obtaining licences or authorisations to provide their services on the ground that their algorithms or technical standards do not meet an adequate threshold (irrespective of the quality and robustness of the standards/algorithms), then it might be inconsistent with the GATS art. XIV chapeau. Another example of a potential violation is when a government illegally shares vital technical information regarding foreign digital technologies with domestic competitors, making it harder for foreign companies to compete in that market and causing potential intellectual property losses.

C Data Ethics and International Trade Law: Possibilities and Challenges

The previous subsections indicate that although GATS art. XIV can justify data ethics-related measures, several questions remain unanswered regarding the extent to which GATS art. XIV provides sufficient policy space for members to impose data ethics-related measures. For instance, should panels place any limits in defining ‘public morals’ or ‘public order’ under GATS art. XIV(a) in accommodating data ethics concerns? Given the technological and policy uncertainty, what standard of review should panels adopt under GATS art. XIV in reviewing data ethics-related measures? Should panels be completely deferential to the risk assessment made by governments in relation to their data ethics-related measures or should they conduct a more substantive assessment? What tools should the panels use in this assessment? How will the growth of new technological mechanisms such as XAI or market-driven standards and verification mechanisms impact the assessment of data ethics-related measures under GATS art. XIV?

Data ethics-related measures are typically nuanced in nature. To understand these measures holistically, governments must focus on both their legal/policy implications and technological impact. Thus, in assessing data ethics-related measures under GATS, panels must follow a well-reasoned, cautious and coherent standard of review that looks at both the technological and legal evidence. However, given the limited technical expertise of panels, they should refrain from engaging in a de novo review of data ethics-related measures and cautiously use technical expert opinions.

In applying this standard of review, two routes are possible. First, in assessing whether certain data ethics-related measures relate to GATS art. XIV, panels can, in addition to considering local values and policy preferences of members, pay regard to developments in the international/multi-stakeholder policy community on data governance. This route is not entirely unrealistic given that data ethics issues implicate several transnational policy concerns and not just domestic concerns. Further, such an approach is also helpful given the critical role of multi-stakeholder institutions in promoting data ethics, as discussed in subsection A, Section II. In Brazil – Taxation, for instance, the panel considered not only the importance of the digital divide as a domestic policy objective within Brazil, but also discussed its relationship with the Millennium Development Goals.Footnote 85 However, this route is politically and legally challenging in circumstances where local values conflict with international/multi-stakeholder norms. WTO tribunals do not have the capacity or mandate to determine the appropriate data ethics frameworks for individual members. Therefore, if a country considers that certain international/multi-stakeholder norms are not aligned with its policy preferences, trade tribunals must not interfere, even when those international/multi-stakeholder norms can lead to better outcomes for data ethics. This limitation, however, may lead to scepticism towards the WTO; that is, the panels cannot make decisions that clearly support a human rights-centric approach in data governance.

The second route is adopting a more stringent weighing and balancing test in assessing data ethics-related measures under GATS art. XIV(a).Footnote 86 The necessity test can be effective in detecting discriminatory or unnecessarily trade-restrictive measures.Footnote 87 For example, looking at the technical aspect of the measure (i.e. inviting expert evidence on whether a data ethics-related measure is actually capable of achieving important policy goals) is less controversial than examining the moral elements of the measure, which often implicate sensitive political or cultural questions. This approach, however, does not necessarily allow panels to consider innovations in the digital sector, such as the potential role of technological mechanisms in the verification of data-driven technologies. For instance, engineers and computer scientists designing data-driven services can build ex ante verification mechanisms that ensure that the program/algorithm meets the specifications in domestic laws and processes.Footnote 88 Panels are unlikely to consider such mechanisms a viable less trade-restrictive alternative under GATS art. XIV, especially when defendant governments do not consider them as effective as regulatory access to source code/algorithms. Similarly, panels are unlikely to consider strict scrutiny/audits of training data by the private companies themselves a fool-proof mechanism to ensure fair and transparent outcomes in algorithmic decision-making, especially when governments restrict automated decision-making in risky and sensitive sectors.Footnote 89 However, as such market-based, technological mechanisms become more fit-for-purpose and reliable, they could come to qualify as viable less trade-restrictive alternatives under GATS art. XIV. Such mechanisms are also likely to be considered credible if they are developed and implemented by the private sector in collaboration with regulatory bodies, especially in countries with sufficient resources to hold private companies accountable for poor data ethics practices.Footnote 90

In the long run, the WTO needs to respond to the predominantly decentralised nature of data governance. For example, the WTO needs to adopt new rules and institutional mechanisms that allow collaboration between governments, technology companies and relevant multi-stakeholder or transnational organisations dealing with data governance. An important example in this regard is the development of technical standards on AI software by the private sector. Currently, GATS does not provide sufficient room for such standards for services.Footnote 91 However, at domestic/regional levels, several governments are coordinating with the private sector on certain aspects of data governance such as development of AI standards. These multi-stakeholder mechanisms could eventually grow transnationally (especially among like-minded countries) and can be facilitated through WTO committees. Eventually, such a broad-based approach could ensure that the WTO plays a more meaningful role in promoting good global data ethics practices and robust digital technologies.

IV Conclusion

This chapter investigated whether the general exceptions in GATS provide adequate policy space to governments to impose data ethics-related measures. In evaluating data ethics-related measures under GATS art. XIV, panels can take into account both international norms and best practices as well as local values or socio-cultural preferences, especially if they are aligned with each other. This chapter also demonstrates that panels can adopt a well-reasoned, cautious and coherent standard of review in assessing the necessity of data ethics-related measures under GATS art. XIV by holistically looking at both legal and technological evidence in each step of the weighing and balancing test. However, the possibility of panels considering a wider range of private sector-driven or multi-stakeholder mechanisms as alternatives to prescriptive data ethics-related measures, especially new verification technologies and technical standards, currently remains limited. Therefore, moving forward, the WTO framework must better co-opt international/multi-stakeholder norms and standards applicable to data-driven services so as to remain more open and responsive to the dynamic policy developments in data governance.

14 Disciplining Artificial Intelligence Policies: World Trade Organization Law as a Sword and a Shield

Kelly K. Shang and Rachel R. Du Footnote *
I Introduction

The rapid development of artificial intelligence (AI) technology has brought humanity both benefits and challenges. The potential for AI technology to be used for controversial purposes, and the need for the international community to develop disciplines on the use of AI, have been noted by many. For example, in May 2019, the Secretary-General of the United Nations (UN) denounced AI-powered “lethal autonomous weapons” as “politically unacceptable [and] morally repugnant”, and called for such weapons to be “prohibited by international law”.Footnote 1 In November 2019, a US Congressional Research Service (CRS) report identified the risks of AI applications being used in surveillance and reconnaissance applications, in autonomous weapon systems,Footnote 2 or to serve “dual-use” purposes.Footnote 3 In February 2020, a European Union (EU) White Paper on AI identified that the use of AI could affect, inter alia, “fundamental rights, including the rights to freedom of expression[,] non-discrimination … [and the] protection of personal data”.Footnote 4

In addition to national security or fundamental rights concerns, the theme of “fair competition” in the development of AI products causes further controversy. For example, the 2019 CRS report on AI, while alluding to China’s Military-Civil Fusion policy, cautioned that some “US competitors may have fewer moral, legal, or ethical qualms” about the development of certain AI applications.

Suggestions and proposals have been made by entities including the EU, the G-20Footnote 5 and the Organisation for Economic Co-operation and Development (OECD)Footnote 6 for the international community to develop new disciplines in regulating the development and use of AI technologies.Footnote 7 However, no binding rules seem to have been reached on an international level at this stage.Footnote 8

In certain areas, states are bound by their existing international law obligations when shaping their AI policies. For instance, AI policies concerning face-recognition cameras need to comply with the various international obligations prescribed in (inter alia) the International Covenant on Civil and Political Rights (ICCPR). Similarly, AI policies that would undermine the national security of other states must also comply with (inter alia) the principle of non-intervention in internal affairs as a general principle of international law.

At present, the primary deterrent against the abuse of AI technology by trade powers is perhaps the unilateral economic sanctionsFootnote 9 taken by states in an individual or collective manner (AI sanctions). Occasionally, such sanctions are criticised for breaching the sanctioners’ commitments under the World Trade Organization (WTO).Footnote 10

This chapter aims to examine the relationship between the current WTO law and the controversial use of AI policies. In particular, it examines the following questions: (a) whether WTO law can sufficiently regulate “data-sharing” policies that seek to promote the development of AI technologies; and (b) whether WTO law can justify sanctions against other WTO members for their controversial use of AI technologies, especially those seeking to undermine fundamental rights or national security.

A preliminary comment needs to be made at this stage: this chapter does not seek to set out legal or ethical “tests” to judge what kind of AI policies are “controversial”, nor does it seek to pronounce any specific AI policy as such. No universal legal or ethical guideline concerning the development or use of AI seems to have been reached so far, possibly because of the significant cultural and ideological differences among major AI powers.

The structure of this chapter is as follows. Section II reviews major types of controversial AI policies among the trade powers, and provides an overview of the international responses to such controversial uses. Section III considers whether current WTO disciplines can sufficiently regulate “data-sharing” policies for the development of AI technologies. Section IV turns to examine whether WTO law can justify sanctions against other WTO members for their controversial use of AI technologies. Section V summarises and concludes this chapter.

II Current Use and International Response to Controversial Artificial Intelligence Policies
A Major Controversies Concerning Artificial Intelligence Policies

Major controversies among trade powers on AI policies are manifested in two ways. The first way concerns the development of AI systems. Specifically, a country may use state power to collect personal data and “feed” them to their AI industry, or alternatively encourage the “shared use” of personal data across government and private sectors.Footnote 11 For example, China’s “military-civil fusion” policy seeks to promote (if not require) data-sharing between its commercial companies and its government,Footnote 12 apparently with the aim of “creating [at a lower cost] the large databases on which AI systems train”.Footnote 13

The second way concerns the use of AI systems. Specifically, AI policies can be used to undermine fundamental rights, either within the WTO member in question or within other members,Footnote 14 in pursuit of policy objectives including domestic surveillance, law enforcement or international espionage. Further, AI policies can be pursued to undermine the national security of other members, including through espionage and the manipulation of another member’s domestic politics, such as elections.Footnote 15

B International Response to Predatory Artificial Intelligence Policies

The potential risks of AI policies have, in recent years, attracted increasing international attention. For example, in 2019, the UN Secretary-General called for international collaborations to “address the risks [of AI and] to develop the frameworks and systems that enable responsible innovation”.Footnote 16

In achieving such a goal, the Secretary-General called for an international regulatory system to be developed for “responsible innovation” in AI, with “binding laws and instruments” in place.Footnote 17 In addition to this suggested path, WTO members may also decide to take collective or individual countermeasures as a deterrent against other states which, in their judgement, maintain problematic AI policies.

In practice, the primary deterrent against problematic AI policies appears to be unilateral economic sanctions. Such sanctions are piecemeal: as an example of AI sanctions targeting human rights abuses, consider the USA’s imposition of Magnitsky sanctions in July 2020 against certain Chinese government individuals and entities that (according to the USA) used AI platforms “for racial profiling” and “data-driven surveillance” against ethnic minorities.Footnote 18 Also, consider the call in June 2020 by the European Parliament for “the EU … and the international community … [to impose] appropriate export control mechanisms including cyber surveillance items to deny China, and in particular Hong Kong, access to technologies used to violate basic rights”.Footnote 19

Sanctions may also be used to restrict AI-powered computer programs that act as surveillance and propaganda instruments for foreign countries. The US Secretary of State’s statement in July 2020 for a possible ban on China’s TikTok app, which apparently uses an AI‑powered algorithm for “censorship and surveillance”, can serve as an example.Footnote 20

C Sanctions on Artificial Intelligence-Powered Goods/Services and World Trade Organization Law

From the perspective of international trade law, it appears that AI sanctions can take at least two forms. First, a sanction may take the form of an import restriction, possibly with the aim of preventing the sanctionee’s problematic AI technology from being in contact with the sanctioner: the USA’s proposed restriction against TikTok being installed on US mobile phones could be an example.

Second, a sanction may also take the form of an export barrier: examples of these measures include the USA’s restriction against China over its use of AI for “racial profiling” and “data-driven surveillance”, and the European Parliament’s proposed sanctions against China and Hong Kong. Specifically, such sanctions can be used either aggressively, with the aim of terminating a (perceived) predatory AI policy (such as by cutting off the supply of “raw materials”), or defensively, as a measure to protect the moral sentiment of the invoking member’s own citizens, who might otherwise feel complicit as “abetters” of the problematic policy in question.

Sanctionees frequently argue that the sanctions they encounter are against WTO rules.Footnote 21 Some scholars seem to hold similar views: Lester and Zhu questioned the WTO consistency of the Trump administration’s expansive use of trade barriers on national security grounds.Footnote 22 In the context of trade restrictions to address data security or foreign influence concerns, Zhou and Kong argued that Australia’s Huawei ban is “unjustifiable under the WTO”.Footnote 23 Similarly, Voon argued that Australia would “face significant challenges” if China were to lodge a WTO complaint against Australia’s Huawei ban.Footnote 24 While the Huawei controversies primarily involve national security concerns on 5G networks, it would seem that similar arguments could be advanced against sanctions on AI products that threaten national security.

Can the trade liberalisation commitments undertaken by WTO members restrict their ability to impose AI sanctions to safeguard fundamental rights or national security? Indeed, if a member were to impose “sanctions” in the form of import or export restrictions on goods or services, it might prima facie contravene its obligations to offer most favoured nation (MFN) treatment (General Agreement on Tariffs and Trade (GATT) Art. I; General Agreement on Trade in Services (GATS) Art. II) and national treatment (NT) (GATT Art. III:1; GATS Art. XVII, provided specific commitments were made), as well as the general obligations to eliminate quantitative restrictions on goods (GATT Art. XI) and the market access obligations for services (GATS Art. XVI, provided specific commitments were made). Accordingly, the centre of any argument concerning the WTO consistency of AI-related sanctions would be the availability of justifications.

This chapter uses the following roadmap in assessing the relationship between predatory AI policies and WTO law. First, it considers whether certain AI policies, especially those promoting “data-sharing” mechanisms between government and private AI firms, can be challenged under WTO law. Second, it considers whether sanctions against controversial AI policies are consistent with WTO law. In doing so, this chapter examines in turn: (a) whether such sanctions contravene non-discriminatory obligations under WTO law; (b) whether “public morals” exceptions are available to such sanctions; (c) whether security exceptions are available to such sanctions; and (d) whether “international peace and security” exceptions are available to such sanctions.

III Disciplining “Data-Sharing” Mechanisms: World Trade Organization Law as a Sword?

As stated earlier, state-operated “data-sharing” mechanisms, through which a government “feeds” data to its private entities for their development of AI products, are potentially controversial for distorting fair competition. This chapter now turns to examine whether such mechanisms can constitute an actionable subsidy under the Agreement on Subsidies and Countervailing Measures (SCM Agreement).

A Are “Data-Sharing” Mechanisms Subsidies?

The general principle in determining actionable subsidies is well established. A measure constitutes an actionable subsidy if (a) it is a subsidy, (b) it is “specific” and (c) its use causes “adverse effects”.Footnote 25 A subsidy exists when (a) there is a financial contribution provided by a government or any public body and (b) such a financial contribution confers a benefit.Footnote 26

1 Do “Data-Sharing” Mechanisms Provide a “Financial Contribution”?

First, the Appellate Body in US–Softwood Lumber IV (2004) observed that “the term of ‘financial contribution’ has a wide definition as the transfer of something of economic value”.Footnote 27 Scholars further argued that data is a “substantial intangible asset”Footnote 28 that can “itself be traded”,Footnote 29 or alternatively be seen as capital “for value creation”.Footnote 30 In practice, data is sold by some governments for profits.Footnote 31 Accordingly, the provision of data would clearly constitute a “financial contribution”.

Furthermore, in determining the existence of a “financial contribution”, a government conduct must fall under one of the four types of manifestations described in subparagraphs (i)–(iv) of Art. 1.1(a)(1) of the SCM Agreement.Footnote 32 Most notably, Art. 1.1(a)(1)(iii) stipulates that:

[A subsidy shall be deemed to exist if …] a government provides goods or services other than general infrastructure, or purchases goods[.]

Accordingly, a “financial contribution” falling under Art. 1.1(a)(1)(iii) exists if (a) there is a “good or service” (b) “provided” by a government and (c) the goods/services provided are “other than general infrastructure”.

WTO jurisprudence appears to construe the concept “goods or services” broadly to include all non-monetary resources. In US–Softwood Lumber IV (2004), the Appellate Body ruled that Art. 1.1(a)(1)(iii) aims to prevent the circumvention of subsidy disciplines in cases of financial contributions granted in a form other than money.Footnote 33

Case law further shows that the “goods or services” requirement would be satisfied if the resource provided is non-monetary, without requiring a panel or the Appellate Body to distinguish whether the resources in question are “goods” or “services”. For instance, in US–Large Civil Aircraft (2nd complaint), the Appellate Body commented that shared “scientific information” and “rights over data” are provisions of “non-monetary resources”,Footnote 34 without specifying whether they are goods or services. Similarly, the Appellate Body in the same dispute ruled that the grant of access to NASA employees constitutes the provision of “goods or services”.Footnote 35

Turning to consider the meaning of “provides”, the Appellate Body ruled that the ordinary meaning of such a term is “supply or furnish for use; make available”,Footnote 36 and that “provide” does not necessarily need to be gratuitous.Footnote 37

As for the meaning of “other than general infrastructure”, the panel in EC and Certain Member States–Large Civil Aircraft (2011) defined “general infrastructure” as “[i]nfrastructure that is not provided to or for the advantage of only a single entity or limited group of entities, but rather is available to all or nearly all entities”.Footnote 38 The panel in the same case further held that such an assessment is stringent, involving any related factors including “the circumstances surrounding the creation of the infrastructure in question … the recipients or beneficiaries of the infrastructure”.Footnote 39

Applying the case law summarised here to the present enquiry, the following observations can be made: first, even assuming that data may not be easily categorised as “goods” or “services”, the fact that data is a non-monetary resource is already sufficient to ensure that it falls under the general scope of “goods or services”.Footnote 40 Second, even if a data-sharing mechanism may involve a bilateral exchange of data between a government and its private sector, such a mechanism still involves the provision of data, as a part of such a mechanism involves the “supply or furnish” of data by a government to its private sector. Third, such a data-“sharing” mechanism will not qualify as general infrastructure if such a mechanism is created and designed specifically for AI firms, which are usually a small number of monopolies;Footnote 41 the beneficiaries are therefore quite limited. Moreover, some data that is useful for AI training, such as medical records and ID photos, is unlikely to fall within the scope of so-called public information/data. If these kinds of data are shared, the data-sharing mechanism likewise cannot be justified as “general infrastructure”. Accordingly, it is likely that a “data-sharing” AI policy will constitute a “financial contribution” that falls within the scope of Art. 1.1(a)(1)(iii) of the SCM Agreement.

2 Does the Financial Contribution Confer a Benefit?

Case law stipulates that the conferral of benefit “should be determined by assessing whether the recipient has received a ‘financial contribution’ on terms more favourable than those available to the recipient in the market”.Footnote 42

Turning to the present issue of data-sharing mechanisms, note that a government operating a data-sharing mechanism is highly likely to have access to a larger pool of data than private enterprises can obtain by themselves under market conditions. Furthermore, certain governments may have access to confidential data that they have extracted through state power. Accordingly, the provision of such a data pool can supply the recipients with crucial “raw materials” that cannot easily be obtained elsewhere, thereby conferring on them a stronger position in the market. Accordingly, such a financial contribution would confer a benefit under the meaning of SCM Art. 1.1(b); assuming that a granting authority is a “government or any public body”,Footnote 43 a data-sharing mechanism would constitute a subsidy.

B Do “Data-Sharing” Mechanisms Meet the Standard of “Specificity”?

The examination of “specificity” largely depends on the facts of a particular case; it is difficult to make general pronouncements in the abstract. However, a shared “data pool”, being a highly technical mechanism, perhaps can only be meaningfully used by the AI industry. If this is so, then it is likely that a data-sharing mechanism would be specific to “certain enterprises” and not “broadly available and widely used throughout an economy”.Footnote 44

Further, considering the fact that a data-sharing platform designed for the development of AI constitutes “a subsidy programme which is mainly used by certain enterprises”,Footnote 45 it is likely that a “data-sharing” mechanism can (at least)Footnote 46 constitute de facto specificity under the meaning of SCM Art 2.1(c).

In the light of this analysis, a “data subsidy” is highly likely to meet the standard of specificity pursuant to the SCM Agreement.

C Do “Data-Sharing” Mechanisms Have Adverse Effects?

An examination of the adverse effects of a subsidy largely depends on the specific facts of an actual case; it is difficult to make general pronouncements concerning “data-sharing” mechanisms in the abstract. However, a “data subsidy” has the potential of reducing the cost of collecting data for “training” AI systems, thus allowing commercial firms to cut the price of their AI products for exportation. This is likely to constitute “significant price undercutting” under the meaning of SCM Art. 6.3(c). As such, it is possible that a “data subsidy” will have adverse effects pursuant to Arts 5(c) and 6.3 of the SCM Agreement.

Summarising these discussions, it can be concluded that an AI policy involving a “data-sharing” mechanism is likely to constitute an actionable subsidy, under the meaning of the SCM Agreement. Consequently, injured members would be entitled to impose countervailing duties against AI products (such as AI-powered robots or vehicles) that are subsidised by “data-sharing” mechanisms.

IV Disciplining Artificial Intelligence Policies: World Trade Organization Rules as a Shield?

This chapter now proceeds to consider whether an AI sanction, being an import or export restriction aimed at addressing other members’ AI policies that undermine fundamental rights or national security (such as the proposed EU export control for cyber surveillance items against Hong Kong), would be consistent with WTO law. At the outset, it should be noted that some AI sanctions may not contravene the non-discriminatory obligations under WTO law in the first place, since AI products that “do” and “do not” undermine such values may not satisfy the “likeness test” because of different consumer habits and preferences.

A Availability of a “Public Moral” Defence

Assuming that an AI sanction does prima facie contravene WTO rules (such as MFN/NT, general elimination of quantitative restrictions or market access obligations), this chapter now proceeds to consider whether such a sanction may be justified under the “public moral exceptions”, especially Art. XX(a) of the GATT 1994 and Art. XIV(a) of the GATS.

1 Summary of Existing Case Law

The law pertaining to public moral exceptions is well settled. Using Art. XX(a) of the GATT 1994 as an example (as the position of GATS Art. XIV(a) is similar), the invocation of such a justification involves a two-tier test: a measure must “first be provisionally justified under [Art. XX(a)], before it is subsequently appraised under the chapeau of Article XX”.Footnote 47 In satisfying Art. XX(a), a member must demonstrate that its measure (a) was adopted or enforcedFootnote 48 “to protect public morals”, and (b) is “necessary” to protect such public morals.Footnote 49 The enquiry then proceeds to the chapeau of Art. XX, which probes whether the application of a measure constitutes “arbitrary or unjustifiable discrimination” or “disguised restriction of international trade”.

It is well settled that “public morals” is defined as “standards of right and wrong conduct maintained by or on behalf of a community or nation”.Footnote 50 Panels and the Appellate Body have further given a considerable degree of deference to the members to “define and apply for themselves the concept of public morals according to their own systems and scales of values”.Footnote 51

The constituent test of “public morals” in GATT Art. XX(a) is represented by the panel report in EC–Seal Products (2014), in which a two-tier test was prescribed to examine:Footnote 52

first, whether the [public morals] concern … indeed exists in that society; and, second, whether such concern falls within the scope of “public morals” as “defined and applied” by a regulating Member “in its territory, according to its own systems and scales of values”.

With regard to the first element, the panel considered the EU measure’s text,Footnote 53 legislative history,Footnote 54 and structure and design; it also considered (although to a limited extent) the result of a public survey.Footnote 55

With regard to the second element, the panel considered the legislative history of the EU measure under challenge,Footnote 56 the ethical/moral references concerning seal welfare in EU law,Footnote 57 the domestic law of certain EU countriesFootnote 58 and certain recommendations from international organisations.Footnote 59

2 Availability of the Defence

Applying the law summarised in the previous subsection to the present discussion on AI sanctions, the following observations can be made. First, concerns relating to fundamental rights or national security are very likely to exist in the sanctioner’s society, and indeed perhaps in any major society in the world. Second, fundamental rights or national security are very likely to fall within the scope of “public morals” within the sanctioner’s society; in practice, the sanctioner may refer to documents such as its constitutional legislation or parliamentary records to show that its concerns are genuinely held.

Accordingly, assuming that other requirements for a “public morals” defence (such as the “necessity” test and the tests under the GATT Art. XX/GATS Art. XIV chapeau) are satisfied, an AI sanction would be successfully defended under the “public morals” exceptions. In sum, it appears that the “public morals” exceptions are, perhaps in a way similar to that in EC–Seal Products (2014), capable of justifying AI sanctions genuinely held to address a concern of national security or fundamental rights.

B Availability of a Defence under the “Security Exception”

This chapter now proceeds to consider whether a WTO member may seek to justify an AI sanction relating to the protection of national security or fundamental rights under the “security exception”, especially Art. XXI(b)(iii) of the GATT 1994 or Art. XIV bis of the GATS.

1 Summary of Existing Case Law

The panel report in Russia–Traffic in Transit (2019)Footnote 60 is currently the leading case law concerning security exceptions. In essence, it ruled (in the context of Art. XXI(b)(iii) of the GATT 1994) that (a) in general, it is left to every member to define, on its own subjective standards, what it considers to be its essential security interests,Footnote 61 although the exercise of such a liberty must be subject to the “obligation of good faith”;Footnote 62 and (b) whether an action was taken in time of an “emergency in international relations” is “subject to objective determination”Footnote 63 by panels and the Appellate Body.

Given that the subjective tests set out here are relatively easily met, it would appear that, in determining the availability of a security exception, the core of the enquiry would be the objective determination of whether there exists an “emergency in international relations” (the “subparagraph (iii) test”).

In Russia–Traffic in Transit (2019), the panel went to some lengths in considering what would constitute an “emergency in international relations”. For the purpose of the present discussion, it is perhaps sufficient to notice the following points. First, the panel appeared to interpret “emergency in international relations” liberally; it held that such an expression includes “war”Footnote 64 and “[a]rmed conflict … between governmental forces and private armed groups … (non-international armed conflict)”.Footnote 65

Second, the panel ruled that an “emergency in international relations” must be understood as “eliciting the same type of interests as those arising from the other matters addressed in the enumerated subparagraphs of Article XXI(b)”,Footnote 66 and that such interests are “all defence and military interests, as well as maintenance of law and public order interests”.Footnote 67 According to the panel, while it is “normal to expect that Members will … encounter political or economic conflicts with other Members or states”,Footnote 68 such conflicts “will not be ‘emergencies in international relations’ … unless they give rise to defence and military interests, or maintenance of law and public order interests”.Footnote 69

Third, the panel suggested a definition for the expression “emergency in international relations”, which must be reproduced in full:

An emergency in international relations would, therefore, appear to refer generally to a situation of armed conflict, or of latent armed conflict, or of heightened tension or crisis, or of general instability engulfing or surrounding a state. Such situations give rise to particular types of interests for the Member in question, i.e. defence or military interests, or maintenance of law and public order interests.Footnote 70

Summarising this, it appears that the subparagraph (iii) test involves a two-pronged examinationFootnote 71 in determining whether an “emergency in international relations” exists, namely: (a) whether there exists a “situation” of conflict, tension or crisis; and (b) whether such a “situation” gives rise to interests of “defence or military interests, or maintenance of law and public order interests”.Footnote 72

With regard to element (a), case law suggests that although a “situation” must have some degree of seriousness (recall that the panel used the expressions “heightened tension” and “general instability” in the paragraph cited earlier),Footnote 73 such a “situation” does not necessarily need to involve armed conflictFootnote 74 or international conflict. With regard to element (b), recall that the expression “public order” was interpreted in the jurisprudence relating to Art. XIV(a) of the GATS to include a broad range of interests, such as the prevention of gambling.

Returning to the application of the law pertaining to the determination of “emergency in international relations”, the panel report in Russia–Traffic in Transit (2019) cited approvingly the following heads of evidence adduced by Russia, and considered them “sufficient”:Footnote 75

(a) the time-period in which it arose and continues to exist, (b) that the situation involves Ukraine, (c) that it affects the security of Russia’s border with Ukraine in various ways, (d) that it has resulted in other countries imposing sanctions against Russia, and (e) that the situation in question is publicly known.Footnote 76

In considering such evidence, the panel referred to at least two resolutions of the UN General Assembly (UNGA), one of which “ma[de] explicit reference to the Geneva Conventions of 1949”,Footnote 77 as well as several Russian domestic decrees.

Summarising this, it can be concluded that, in examining the existence of an “emergency in international relations”, it is “not relevant”Footnote 78 for a panel or the Appellate Body to determine which actor bears international responsibility for the “situation”, or how the “situation” should be “characterize[d] … under international law in general”.Footnote 79 Instead, a panel or the Appellate Body needs to be persuaded as to the existence of a “situation” (or “element (a)” identified earlier); in doing so, it may consider the following evidence:

(a) whether the international relations in question have “deteriorated to such a degree” that they have become “a matter of concern to the international community”;

(b) whether the “situation” was “recognized internationally” or “publicly known”;Footnote 80

(c) whether the “situation” “continued to exist”Footnote 81 for some period;

(d) whether other countries have imposed sanctions or countersanctions in connection with this “situation”.

2 Availability of the Defence

In applying this jurisprudence to the current question of AI sanctions, it appears that sanctions for the protection of fundamental rights or for the protection of national security will satisfy the requirement of “emergency in international relations”, provided the “situations” involved have the required degree of seriousness.

The determination of the degree of “seriousness” largely depends on the facts of particular cases. Nevertheless, using as an example the EU’s potential ban on China’s “access to technologies used to violate basic rights” due to the instabilities in Hong Kong, it would appear that the Hong Kong “situation”, which involves worldwide controversies with trade powers such as Australia, Canada, China, the EU, New Zealand, the UK and the USA, is likely to have the required degree of “heightened tension or crisis” and seriousness to satisfy element (a) of the “subparagraph (iii) test”.

Turning to element (b) of the subparagraph (iii) test, it is obvious that a “situation” concerning fundamental rights, such as the situation in Hong Kong, would (at least) give rise to interests in public order and possibly security interests. This is especially so when considering the close relationship (elaborated in subsection C of Section IV) between the fundamental rights of individual citizens, on the one hand, and international peace and security, on the other. Moreover, a “situation” concerning national security (such as spying) would clearly give rise to interests of defence, military and public order, consequently satisfying element (b) of the test.

As stated earlier, other requirements under Art. XXI(b)(iii) of the GATT 1994 or Art. XIV bis of the GATS are, under the current case law, subjective tests that do not present difficult hurdles to an invoking WTO member (although subject to the “obligation of good faith” requirement).Footnote 82 Assuming that such tests are satisfied, it would appear that if an AI sanction serves the purpose of protecting national security or fundamental rights protection, such a sanction will be eligible for the “security exception” justification.

C Availability of a Defence under the “International Peace and Security” Exceptions

It is also possible that an AI sanction can be justified under the “international peace and security” exceptions, especially Art. XXI(c) of the GATT 1994 and Art. XIV bis (c) of the GATS, both of which allow a member to justify “any action in pursuance of its obligations under the United Nations Charter for the maintenance of international peace and security”. Again, the EU’s proposed export control mechanisms on cyber surveillance items against China and Hong Kong can serve as an example.

The exact ambit of GATT Art. XXI(c) and GATS Art. XIV bis(c) remains uncertain at present, since neither provision has been invoked before any WTO or GATT panel so far. However, using GATT Art. XXI(c) as an example, it appears that the text of this provision entails the following constituent tests: (a) the measure is imposed “in pursuance of” (b) “[the invoking member’s] obligations under the United Nations Charter”, (c) “for the maintenance of international peace and security”.

Moreover, the broad expression “any action”, read together with the lack of any “chapeau” similar to that in GATT Art. XX, seems to indicate that Art. XXI(c) entails less stringent tests than those under Art. XX. Nevertheless, the “obligation of good faith” requirement,Footnote 83 which was first introduced to eliminate members’ “re-label[ling of] trade interests” as “essential security interests” under GATT Art. XXI(b)(iii), might play a similar role in preventing the abuse of Art. XXI(c) justifications.

1 “For the Maintenance of International Peace and Security”

A close examination of the Charter of the United Nations (UN Charter) and the ICCPR shows that it is a well-recognised principle of international law that the protection of fundamental rights for individuals also serves the purpose of maintaining international peace and security. To start with, recall that the preamble of the UN Charter provides, inter alia, that:

We the peoples of the United Nations determined … to reaffirm faith in fundamental human rights, in the dignity and worth of the human person, in the equal rights of men and women[.]

Art. 55(c) of the UN Charter provides that:

With a view to the creation of conditions of stability and well-being which are necessary for peaceful and friendly relations among nations … the United Nations shall promote … universal respect for, and observance of, human rights and fundamental freedoms for all[.]

A collective reading of the UN Charter’s preamble and Art. 55(c), especially the expression “with a view” in Art. 55, shows that the promotion of universal human rights and fundamental freedoms does serve for the maintenance of “peaceful … relations among nations”. Further, Art. 1.3 of the UN Charter states that “promoting and encouraging respect for human rights and for fundamental freedoms” is one of the purposes of the UN.Footnote 84

In addition, the preamble of the ICCPR provides, inter alia, that:

The States Parties … [c]onsider that, in accordance with the principles proclaimed in the Charter of the United Nations, recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world[.]

[The States Parties recognise that] the ideal of free human beings enjoying civil and political freedom and freedom from fear and want can only be achieved if conditions are created whereby everyone may enjoy his [sic] civil and political rights[.]

These provisions reinforce a close causal relationship between the protection of the rights of “all members of the human family” and the achievement of international peace. The expression “in accordance with the principles proclaimed in the [UN] Charter” further confirms the close relationship between the obligations under the ICCPR and the UN Charter. Summarising this, it would appear that the policy aim of protecting fundamental rights is likely to fall within the scope of “for the maintenance of international peace and security”.

2 “Obligations under the UN Charter”

Turning to examine whether the protection of fundamental rights for individuals is an “obligation” under the UN Charter,Footnote 85 one could again be assisted by the earlier-cited UN Charter and ICCPR provisions to find a positive answer to such an enquiry. Further, the preamble of the ICCPR unequivocally recognises an “obligation of States under the [UN Charter] to promote universal respect for, and observance of, human rights and freedoms”; this confirms that the protection of fundamental rights is a Charter obligation.

3 “In Pursuance of …”

Finally, turning to examine whether an AI sanction imposed to protect fundamental rights can satisfy the “in pursuance of” element of such a policy aim, it is perhaps prudent to say that such a determination should only be made in the context of the actual cases. However, note that the term “pursuance” is defined under the Shorter Oxford Dictionary as (inter alia) “[t]he action of trying to attain or accomplish something”, and nothing (in the context, the objective and purpose, etc.) seems to indicate that the ordinary meaning of such a term should depart from its dictionary meaning. As “trying to …” clearly denotes a much weaker causal link than “relating to”, “necessary to” or “essential to”, it would appear that the “in pursuance of” element would, in practice, be relatively easy to satisfy.

4 Availability of the Defence

As the discussions in subsection C of Section IV demonstrate, it is possible for an AI sanction imposed to protect fundamental rights to be justified under the “international peace and security” exceptions, especially Art. XXI(c) of the GATT 1994 and Art. XIV bis(c) of the GATS. In particular, an examination of the UN Charter and the ICCPR provisions can show that the protection of fundamental rights does satisfy the “for the maintenance of international peace and security” requirement. Further, the protection of fundamental rights is also an obligation under the UN Charter. Whether an AI sanction can satisfy the “in pursuance of” requirement necessarily depends on the actual circumstances of a case, but the ordinary meaning of “in pursuance of” does not seem to demand a test as stringent as, for example, the “necessary to” test under the “general exceptions”. Accordingly, it is likely that an AI sanction imposed to protect fundamental rights can be justified under the “international peace and security” exceptions.

V Conclusion

AI technologies have brought humanity both benefits and challenges. However, some AI products may be used to threaten non-trade values including fundamental rights and national security.Footnote 86 This is especially so when a trade power pursues controversial AI policies, such as developing AI systems through means that undermine fundamental rights or national security, or using AI systems for purposes of diminishing such values.

AI policies can become controversial among the international community if they (a) undermine fundamental rights, and by doing so threaten international peace and security; (b) threaten national security; and (c) raise fair competition concerns by allowing certain AI developers an unfair advantage under “data-sharing” mechanisms in accessing data to “train” their AI. Suggestions have been made for the international community to develop new disciplines in ensuring that AI technologies are used for the benefit of humanity. At present, however, economic sanctions taken by trade powers remain the main deterrent against the adoption of controversial AI policies.

In this chapter, it is argued that WTO law can provide some assistance in controlling controversial AI policies. First, AI policies that promote “data-sharing” mechanisms between government and private AI firms can be challenged as actionable subsidies which transfer data as “raw materials” to the private sector for the latter’s development of AI products.

Second, economic sanctions against WTO members for controversial AI policies, if genuinely held to combat threats to fundamental rights or national security, are likely to be consistent with WTO law: accordingly, WTO law allows liberty for the international community to promote fundamental rights and national security by sanctioning the controversial AI policies that undermine such values. Specifically, some AI sanctions may not contravene non-discriminatory obligations under WTO law, since there might be no “likeness” between AI products that “do” and “do not” attract controversies (such as between mobile “apps” that collect data for racial profiling and those that do not). Even assuming that AI sanctions are prima facie inconsistent with WTO rules on trade liberalisation, they may be justified under the “public morals”, “security” and/or “international peace and security” exceptions under the GATT 1994 and/or the GATS.

It might be asked whether WTO law, especially the various exceptions discussed here, can be used to harbour protectionist measures under the guise of fundamental rights or national security concerns. This is unlikely to be so. First, a member may find it difficult to argue that a protectionist measure does not contravene WTO principles of non-discrimination, since the products involved will be “like”. Second, a member would also face difficulties in invoking the “public morals” defence for a protectionist measure, since doing so would involve the stringent “necessity” test and the tests under the GATT Art. XX/GATS Art. XIV chapeau. Third, a protectionist measure is unlikely to be defended under security exceptions: as Russia–Traffic in Transit (2019) shows, such a measure would face difficulties in satisfying the “emergency in international relations” requirement and the “good faith” requirement.

Finally, a protectionist measure also cannot be defended under the “international peace and security” exceptions, since it is difficult to establish how a protectionist measure could contribute to the maintenance of international peace and security.

Accordingly, although WTO rules cannot be seen as a “magic pill” that instantly heals the deep divisions of humankind that were perhaps ultimately caused by ideological differences, one should be confident in their contribution to non-trade values such as security and fundamental rights.

Footnotes

12 Public Morals, Trade Secrets, and the Dilemma of Regulating Automated Driving Systems

* The author would like to thank Chia-Chi Chen, I-Ching Chen, Mao-wei Lo, and Si-Wei Lu for their research assistance. Any remaining errors are the author’s sole responsibility.

1 Various terms are used to refer to vehicles equipped with different levels of driving automation systems (a generic term that covers all levels of automation), such as self-driving cars, unmanned vehicles, and automated vehicles. However, as explained in Section II, the inconsistent and sometimes confusing use of terms may lead to regulatory misconceptions. This chapter uses “automated driving systems” to cover level 3–5 systems according to the most widely recognized classification by SAE International. See also Peng’s Chapter 6 in this volume.

2 “Autonomous Vehicle Market Outlook – 2026” (2018), https://perma.cc/9B5S-GYRE.

3 “IHS Clarifies Autonomous Vehicle Sales Forecast – Expects 21 Million Sales Globally in the Year 2035 and Nearly 76 Million Sold Globally Through 2035” (IHS Markit, 9 June 2016), https://perma.cc/77J7-VQ56.

4 More specifically, AI algorithms and sensing technologies help to draw a real-time, three-dimensional map of the environment (a 60-meter range around the vehicle), monitor surrounding activities, and navigate and operate the vehicle (e.g., controlling speed, braking, steering, and gear selection). See Autonomous Vehicle Market Outlook – 2026, note 2 above. See also HY Lim, Autonomous Vehicles and the Law: Technology, Algorithms and Ethics (Cheltenham, Edward Elgar Publishing, 2019), at 519.

5 See K Kokalitcheva, “Toyota Becomes Uber’s Latest Investor and Business Partner” (Fortune, 24 May 2016), https://perma.cc/254A-7HSX.

6 See K Korosec, “Autonomous Car Sales Will Hit 21 Million by 2035, IHS Says” (Fortune, 7 June 2016), https://perma.cc/4HEX-MHJT.

7 For example, the United States government announced in 2016 its $4 billion investment in automated vehicles. See B Vlasic, “U.S. Proposes Spending $4 Billion on Self-Driving Cars” (New York Times, 14 January 2016), https://perma.cc/36DJ-QKMQ.

8 See A Taeihagh and HSM Lim, “Governing Autonomous Vehicles: Emerging Responses for Safety, Liability, Privacy, Cybersecurity, and Industry Risks” (2018) 39(1) Transport Reviews 103, at 107–109; S Nyholm and J Smids, “The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?” (2016) 19(5) Ethical Theory & Moral Practice 1275, at 1275–1289.

9 See the discussion in Section II.

10 In addition, the respective regulatory governance strategies of these countries may change and adapt in light of ongoing economic growth, national security, and business competition issues. Their regulatory endeavors, as well as competition (or cooperation), may also lead to a more coherent global standard-setting process in international arenas. See generally H-W Liu, “International Standards in Flux: A Balkanized ICT Standard-Setting Paradigm and Its Implications for the WTO” (2014) 17(3) Journal of International Economic Law 551; M Du, “WTO Regulation of Transnational Private Authority in Global Governance” (2018) 67(4) International and Comparative Law Quarterly 867.

11 In some cases, the General Agreement on Trade in Services (GATS) may come into play, especially when most ADSs do not fall squarely into either “goods” or “services” in light of the increasing “servitization” of modern manufacturing. See E Lafuente et al., “Territorial Servitization and the Manufacturing Renaissance in Knowledge-Based Economies” (2019) 53(3) Regional Studies 313; T Baines et al., “Servitization of the Manufacturing Firm: Exploring the Operations Practices and Technologies That Deliver Advanced Services” (2014) 34(1) International Journal of Operations & Production Management 2; G Lay (ed.), Servitization in Industry (New York, Springer, 2014). The discussion on service under the GATS is beyond the scope of this chapter, the primary focus of which lies in product-oriented standards and rules.

12 See SAE International, J3016_201806: Surface Vehicle Recommended Practice: (R) Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (first issued in January 2014, and revised in June 2018 to supersede J3016, adopted in September 2016) (hereinafter SAE International J3016_201806). This definition and taxonomy are embraced by the United States Department of Transportation (US DoT) and the National Highway Traffic Safety Administration (NHTSA); see US DoT, “Preparing for the Future of Transportation: Automated Vehicles 3.0” (2018), https://perma.cc/E4WY-AMN3, at 45.

13 Ibid., at 28.

16 According to the US DoT and NHTSA’s estimation, around 90 percent of car accidents are the result of human error. See US DoT and NHTSA, “Traffic Safety Facts: A Brief Statistical Summary – Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey” (2015), https://perma.cc/JV6M-TC3M. The advent of ADSs may help reduce or even eliminate this human error factor, as these systems promise to outperform human drivers. See Taeihagh and Lim, note 8 above, at 107–109. See also Y Sun et al., “Road to Autonomous Vehicles in Australia: An Exploratory Literature Review” (2017) 26(1) Road and Transport Research: A Journal of Australian and New Zealand Research and Practice 34, at 34–47.

17 See, for example, A von Ungern-Sternberg, “Autonomous Driving: Regulatory Challenges Raised by Artificial Decision-Making and Tragic Choices,” in W Barfield and U Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (Cheltenham, Edward Elgar Publishing, 2018), at 253–254; and Taeihagh and Lim, note 8 above, at 107–109.

18 “After all, humans can be amazing drivers, the performance of advanced automation systems is still unclear … and automation shifts some errors from driver to designer.” BW Smith, “Human Error as a Cause of Vehicle Crashes” (Centre for Internet and Society, 18 December 2013), https://perma.cc/VN5B-SST4.

19 See generally Lim, note 4 above.

20 See, for example, DM West, “Moving Forward: Self-Driving Vehicles in China, Europe, Japan, Korea, and the United States” (2016), https://perma.cc/8SWG-GX2Y; V Dhar, “Equity, Safety, and Privacy in the Autonomous Vehicle Era” (2016) 49(11) Computer 80, at 80–83; JM Anderson et al., “Autonomous Vehicle Technology: A Guide for Policymakers” (2014), https://perma.cc/5FBA-UVRQ; FD Page and NM Krayem, “Are You Ready for Self-Driving Vehicles?” (2017) 29(4) Intellectual Property and Technology Law Journal 14.

21 See J Boeglin, “The Costs of Self-Driving Cars: Reconciling Freedom and Privacy with Tort Liability in Autonomous Vehicle Regulation” (2015) 17(1) Yale Journal of Law and Technology 171, at 176–185; M Gillespie, “Shifting Automotive Landscapes: Privacy and the Right to Travel in the Era of Autonomous Motor Vehicles” (2016) 50 Washington University Journal of Law and Policy 147, at 147–169. See also DJ Glancy, “Privacy in Autonomous Vehicles” (2012) 52(4) Santa Clara Law Review 1171; J Schoonmaker, “Proactive Privacy for a Driverless Age” (2016) 25(2) Information & Communications Technology Law 96; S Gambs et al., “De-anonymization Attack on Geolocated Data” (2014) 80(8) Journal of Computer and System Sciences 1597.

22 See SA Bhatti, “Automated Vehicles: Challenges to Full Scale Deployment” (Wavelength, 26 September 2019), https://perma.cc/5J8G-3B4V.

23 See JP Trachtman, “The Internet of Things Cybersecurity Challenge to Trade and Investment: Trust and Verify?” (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3374542.

24 See, for example, I Coca-Vila, “Self-Driving Cars in Dilemmatic Situations: An Approach Based on the Theory of Justification in Criminal Law” (2018) 12(1) Criminal Law and Philosophy 59; see also FS de Sio, “Killing by Autonomous Vehicles and the Legal Doctrine of Necessity” (2017) 20(2) Ethical Theory and Moral Practice 411.

25 See generally P Foot, “The Problem of Abortion and the Doctrine of the Double Effect,” in Virtues and Vices (Oxford, Basil Blackwell, 1978) (originally appeared in Oxford Review 5, 1967).

26 See K Hao, “Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re from” (MIT Technology Review, 2018), https://perma.cc/K69S-V8H6.

27 E Awad et al., “The Moral Machine Experiment” (2018) 563 Nature 59.

29 Ibid., at 62–63.

31 See Coca-Vila, note 24 above, at 62–66.

32 One commentator also notes that the Trolley Problem and ethical principles might play a less decisive role than predictive legal liabilities that readily translate into monetary constraints on ADS manufacturers that are driven by profits. See B Casey, “Amoral Machines, or: How Roboticists Can Learn to Stop Worrying and Love the Law” (2017) 111 Northwestern University Law Review 231.

33 See J Kleinberg et al., “Discrimination in the Age of Algorithms” (2018) 10 Journal of Legal Analysis 1, at 4.

35 See A Hevelke and J Nida-Rümelin, “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis” (2015) 21(3) Science and Engineering Ethics 619, at 619–630; and JM Tien, “The Sputnik of Servgoods: Autonomous Vehicles” (2017) 26(2) Journal of Systems Science and Systems Engineering 133, at 133–162.

36 See generally H-W Liu and C-F Lin, “Artificial Intelligence and Global Trade Governance: Towards A Pluralist Agenda” (2020) 61 Harvard International Law Journal 407.

37 See, for example, Liu, note 10 above.

38 See generally Du, note 10 above.

39 See US DoT, note 12 above, at 57–63.

40 Ibid., at 60.

41 See British Standards Institution, PAS 1885:2018: The Fundamental Principles of Automotive Cyber Security (December 2018); see also United Kingdom Department for Transport, Centre for Connected and Autonomous Vehicles, and Centre for the Protection of National Infrastructure, “The Key Principles of Cyber Security for Connected and Automated Vehicles” (2017), www.gov.uk/government/publications/principles-of-cyber-security-for-connected-and-automated-vehicles/the-key-principles-of-vehicle-cyber-security-for-connected-and-automated-vehicles.

42 Unmanned Vehicles Technology Innovative Experimentation Act (Taiwan) (UV Act). The UV Act was promulgated on 19 December 2018.

43 UV Act, Art. 3.

44 See Taeihagh and Lim, note 8 above, at 10.

45 See Federal Ministry of Transport and Digital Infrastructure, “Ethics Commission: Automated and Connected Driving” (2017), https://perma.cc/YQ8S-KTE9 (hereinafter 2017 Germany Ethical Commission Report); see also C Lütge, “The German Ethics Code for Automated and Connected Driving” (2017) 30(4) Philosophy and Technology 547.

46 2017 Germany Ethical Commission Report, note 45 above.

47 2017 Germany Ethical Commission Report, at 6–9 (“Ethical Rules for Automated and Connected Vehicular Traffic”), Rule 2.

48 Ibid., Rule 4.

49 Ibid., Rule 7.

50 Ibid., Rule 8.

51 Ibid., Rule 9.

52 See Taeihagh and Lim, note 8 above, at 10.

53 See Lütge, note 45 above, at 557.

54 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation, GDPR), Arts. 21 and 22.

55 European Commission, “Building Trust in Human-Centric AI, Ethics Guidelines for Trustworthy AI,” https://perma.cc/M2WL-NL24.

56 Algorithmic Accountability Act of 2019, OLL19293, 116th Congress (2019).

57 For a review of such transnational regulatory initiatives and their normative ramifications, see Liu and Lin, note 36 above, at 440–450.

58 United Nations Economic Commission for Europe (hereinafter UNECE), Economic and Social Council, Inland Transportation Committee, Working Party on Road Traffic Safety, U.N. Doc. ECE/TRANS/WP.1/145 (24–26 March 2014); UNECE, “UNECE Paves the Way for Automated Driving by Updating UN International Convention” (23 March 2016), https://perma.cc/7PNX-2GA4.

59 1968 Vienna Convention on Road Traffic (78 Parties) and the March 2014 Amendment, https://perma.cc/5C8K-Y3ST.

60 See Liu and Lin, note 36 above, at 410–411.

62 UNECE, “Report of the Sixty-Eighth Session of the Working Party on Road Traffic Safety” (2014), https://perma.cc/JZ3Q-PM62.

63 UNECE, “Report of the Global Forum for Road Traffic Safety on Its Sixty-Seventh Session” (2014), https://perma.cc/RC99-WAXQ (Annex 1, Global Forum for Road Traffic Safety (WP.1) Resolution on the Deployment of Highly and Fully Automated Vehicles in Road Traffic).

64 See Liu and Lin, note 36 above, at 427–428.

65 SAE International J3016_201806, note 12 above.

66 International Organization for Standardization, “ISO 26262 Road Vehicles Functional Safety,” https://perma.cc/L4DL-4V97; ISO, “Intelligent Transport Systems – Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, ISO/SAE NP PAS 22736” (hereinafter ISO/SAE NP PAS 22736), https://perma.cc/BW2M-SVQK.

67 IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (IEEE Global Initiative) has launched “Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems,” https://perma.cc/BQH5-HGHN.

68 ISO/SAE NP PAS 22736, note 66 above.

69 See J Pokrzywa, “SAE Global Ground Vehicle Standards” (2019), https://perma.cc/9BV6-LBVQ.

70 See J Shuttleworth, “SAE Standards News: J3016 Automated-Driving Graphic Update” (2019), https://perma.cc/6STW-BXJF. See also Liu and Lin, note 36 above, at 427.

71 At this moment, it appears challenging to reach multilateral consensus on controversial issues of ADS ethics. As some regulatory initiatives will likely be designed to pursue diverse policy objectives reflecting local values and moral preferences, there may be growing competition among countries.

72 Awad et al., note 27 above, at 62–63.

73 For an in-depth discussion of China’s social credit system and its impact on social and economic activities, see generally Y-J Chen et al., “‘Rule of Trust’: The Power and Perils of China’s Social Credit Megaproject” (2018) 32(1) Columbia Journal of Asian Law 1.

74 Appellate Body Report, European Communities – Measures Affecting Asbestos and Asbestos-Containing Products, WT/DS135/AB/R (5 April 2001) [EC–Asbestos], para. 99.

75 Ibid. See also Appellate Body Report, United States – Measures Affecting the Production and Sale of Clove Cigarettes, WT/DS406/AB/R (24 April 2012) [US–Clove Cigarettes], para. 120.

76 Ibid. Arguably, this market-oriented approach systematically excludes the bases for regulatory distinctions. See JP Trachtman, “WTO Trade and Environment Jurisprudence: Avoiding Environmental Catastrophe” (2017) 58(2) Harvard International Law Journal 273, at 277–281.

77 See Trachtman, note 23 above, at 20.

79 GATT, art. XX(a) and chapeau. See Appellate Body Report, United States – Standards for Reformulated and Conventional Gasoline, WT/DS2/AB/R (20 May 1996) [US–Gasoline], at 22; see also Appellate Body Report, United States – Import Prohibition of Certain Shrimp and Shrimp Products, WT/DS58/AB/R (6 November 1998) [US–Shrimp], paras. 119–120; Appellate Body Report, Brazil – Measures Affecting Imports of Retreaded Tyres, WT/DS332/AB/R (17 December 2007) [Brazil–Retreaded Tyres], para. 139.

80 As noted, however, the discussion on service under the GATS is beyond the scope of this chapter.

81 Appellate Body Report, Colombia – Measures Relating to the Importation of Textiles, Apparel and Footwear, WT/DS461/AB/R (22 June 2016) [Colombia–Textiles], paras. 5.67–5.70.

82 Appellate Body Report, China – Measures Affecting Trading Rights and Distribution Services for Certain Publications and Audiovisual Entertainment Products, WT/DS363/AB/R (19 January 2010) [China–Publications and Audiovisual Products], paras. 239 and 242.

83 Ibid., paras. 300–311, 326–327.

84 Panel Report, China – Publications and Audiovisual Products, WT/DS363/R (19 January 2010), paras. 7.759 and 7.763; see also Panel Report, United States – Measures Affecting the Cross-Border Supply of Gambling and Betting Services, WT/DS285/R (7 April 2005) [US–Gambling], paras. 6.461 and 6.465.

85 See Appellate Body Report, European Communities – Measures Prohibiting the Importation and Marketing of Seal Products, WT/DS401/AB/R (18 June 2014) [EC–Seal Products], paras. 5.200–5.201. Indeed, WTO members and their societies “are not homogenous, either in their domestic political structures or in their ethical, moral, or religious beliefs.” R Howse et al., “Pluralism in Practice: Moral Legislation and the Law of the WTO After Seal Products” (2015) 48 George Washington International Law Review 81, at 85.

86 Trachtman, note 23 above, at 21 (citing Appellate Body Report, United States – Measures Affecting the Production and Sale of Clove Cigarettes, WT/DS406/AB/R (24 April 2012), paras. 96–102).

87 TBT Agreement, Arts. 2.1 and 2.2. See Appellate Body Report, United States – Measures Concerning the Importation, Marketing and Sale of Tuna and Tuna Products, Recourse to Article 21.5 of the DSU by Mexico, WT/DS381/AB/RW (3 December 2015), para. 284.

88 Appellate Body Report, United States – Measures Concerning the Importation, Marketing and Sale of Tuna and Tuna Products, WT/DS381/AB/R (13 June 2012) [US–Tuna], at 320, 322.

89 TBT Agreement, Art. 2.4.

90 See Trachtman, note 23 above, at 22.

91 See Liu and Lin, note 36 above, at 411, 429–430, 446–447.

92 See generally SK Katyal, “The Paradox of Source Code Secrecy” (2019) 104 Cornell Law Review 101.

93 Ibid., at 145–146.

94 TRIPS Agreement, Art. 39.1.

95 TRIPS Agreement, Art. 39.2.

96 TRIPS Agreement, Art. 1.2. See World Intellectual Property Organization (WIPO), Introduction to Intellectual Property: Theory and Practice (2nd ed., Alphen aan den Rijn, Wolters Kluwer, 2017), at 243–246. See NP de Carvalho, The TRIPS Regime of Antitrust and Undisclosed Information (Alphen aan den Rijn, Kluwer Law International, 2008), at 189–190.

97 See J Malbon et al., The WTO Agreement on Trade-Related Aspects of Intellectual Property Rights: A Commentary (Cheltenham, Edward Elgar Publishing, 2014), at 577.

98 Negotiating Group on Trade-Related Aspects of Intellectual Property Rights, including Trade in Counterfeit Goods (1990), Status of Work in the Negotiating Group: Chairman’s Report to the GNG, MTN.GNG/NG11/W/76, Part III, s. 7.1 A.2.

99 Malbon et al., note 97 above, at 579.

100 TRIPS Agreement, Art. 8.1.

101 TRIPS Agreement, Art. 7.

102 TRIPS Agreement, Art. 8.2.

103 See F Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, MA, Harvard University Press, 2015), at 160–161; see also F Pasquale, “Beyond Innovation and Competition: The Need for Qualified Transparency in Internet Intermediaries” (2010) 104 Northwestern University Law Review 105.

104 For instance, China has been accused of forcing foreign companies to disclose sensitive technical data and proprietary source code via a series of administrative processes as a necessary step for market entry, and such data and source code could be passed to domestic competitors. See L Wei and B Davis, “How China Systematically Pries Technology from U.S. Companies” (Wall Street Journal, 26 September 2018), https://perma.cc/ZCV4-DHTK; JY Qin, “Forced Technology Transfer and the US-China Trade War: Implications for International Economic Law,” Wayne State University Law School Research Paper No. 201961 (5 October 2019), 3–4.

105 See, for example, Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), Art. 14.17.

106 See generally Pasquale, note 103 above; and F Pasquale, “Secret Algorithms Threaten the Rule of Law” (MIT Technology Review, 1 July 2017), https://perma.cc/6UYB-86VD.

107 See generally H-W Liu et al., “Beyond State v. Loomis: Artificial Intelligence, Government Algorithmization, and Accountability” (2019) 27(2) International Journal of Law and Information Technology 122.

110 See ibid. See JV Tu, “Advantages and Disadvantages of Using Artificial Neural Networks versus Logistic Regressions for Predicting Medical Outcomes” (1996) 49(11) Journal of Clinical Epidemiology 1225; M Aikenhead, “The Uses and Misuses of Neural Networks in Law” (1996) 12(1) Santa Clara Computer and High Technology Law Journal 31, at 33; and P Margulies, “Surveillance by Algorithms: The NSA, Computerized Intelligence Collection, and Human Rights” (2016) 68 Florida Law Review 1045, at 1069.

111 See L Zhou et al., “A Comparison of Classification Methods for Predicting Deception in Computer-Mediated Communication” (2004) 20(4) Journal of Management Information Systems 139, at 150–151.

112 See generally MU Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies” (2016) 29(2) Harvard Journal of Law & Technology 353.

13 International Trade Law and Data Ethics: Possibilities and Challenges

1 J Yeung, ‘What Is Big Data and What Can Artificial Intelligence Do?’ (Towards Data Science, 30 January 2020), perma.cc/Z7CS-JZQ3.

2 T Philbeck et al., ‘Values, Ethics and Innovation: Rethinking Technological Development in the Fourth Industrial Revolution’ (White Paper, World Economic Forum, August 2018), at 4; Organisation for Economic Co-operation and Development (OECD), ‘Data-Driven Innovation for Growth and Well-Being’ (2015), www.oecd.org/sti/ieconomy/data-driven-innovation.htm; World Health Organization, ‘Big Data and Artificial Intelligence’, www.who.int/ethics/topics/big-data-artificial-intelligence/en; NITI Aayog, ‘National Strategy for Artificial Intelligence’ (2018), https://niti.gov.in/national-strategy-artificial-intelligence, at 24–45.

3 D Leslie, ‘Understanding Artificial Intelligence Ethics and Safety’ (Alan Turing Institute, 2019), https://perma.cc/7V82-JRNR, at 4. See also M Brundage et al., ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’ (Future of Humanity Institute and others, February 2018), https://perma.cc/46NB-8HS2.

4 See generally J Fjeld et al., ‘Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI’ (Berkman Klein Center for Internet & Society, 2020).

5 L Floridi and M Taddeo, ‘What Is Data Ethics?’ (2018) 374 Philosophical Transactions 1, at 1.

6 See Ofcom/Cambridge Consultants, ‘Use of AI in Content Moderation’ (2019), https://perma.cc/4WA4-NKVA.

7 See Department for Promotion of Industry and Internal Trade (Government of India), ‘Draft Electronic Commerce Policy’ (2019), https://dipp.gov.in/sites/default/files/DraftNational_e-commerce_Policy_23February2019.pdf, at 30; N Wilson, ‘China Standards 2035 and the Plan for World Domination – Don’t Believe China’s Hype’ (CFR, 3 June 2020), https://perma.cc/K5LX-PDXQ; A Gross et al., ‘Chinese Tech Groups Shaping UN Facial Recognition Standards’ (The Financial Times, 2 December 2019), https://perma.cc/T4VD-A8MD.

8 See subsection B in Section II.

9 World Trade Organization, ‘World Trade Report 2018: The Future of World Trade: How Digital Technologies are Transforming Global Commerce’ (2018), https://perma.cc/7NHM-BCU7, at 32.

10 Henceforth referred to as ‘members’.

11 Henceforth referred to as ‘panels’.

12 See Authority of the House of Lords, ‘Regulating in a Digital World’ (2019), https://perma.cc/YM3H-FG6B; European Parliament, A Comprehensive European Industrial Policy on Artificial Intelligence and Robotics, Doc no. P8_TA-PROV(2019)0081 (12 February 2019); European Commission, ‘Ethics Guidelines for Trustworthy AI’ (2019), https://perma.cc/37YZ-2E59; OECD, Recommendation of the Council on Artificial Intelligence, Doc no. OECD/LEGAL/0449 (22 May 2019); NIST, ‘US Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools’ (2019), https://perma.cc/Z4G7-TUKJ.

13 C Cath and L Floridi, ‘The Design of the Internet’s Architecture by the Internet Engineering Task Force (IETF) and Human Rights’ (2017) 23(2) Science and Engineering Ethics 449, at 455; IEEE, ‘Ethically Aligned Design – First Edition’ (2019), https://perma.cc/6VZ2-EXNC, at 10. In the specific context of AI, see Fjeld et al., note 4 above.

14 Progress Report of the United Nations High Commissioner for Human Rights on Legal Options and Practical Measures to Improve Access to Remedy for Victims of Business-Related Human Rights Abuses, UN Doc A/HRC/29/39 (May 2015); Montreal Declaration for Responsible Development of Artificial Intelligence (2018); OECD, note 12 above.

15 Personal Data Protection Commission Singapore, ‘A Proposed Model for Artificial Intelligence Governance Framework’ (January 2019), at 6; Department of Industry, Innovation and Science (Government of Australia), ‘AI Ethics Principles’ (2019), www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles; European Commission, note 12 above.

16 United Nations, ‘A Human-Rights Based Approach to Data’ (2018), https://perma.cc/AX88-85VN.

17 L Taylor, ‘Group Privacy: Big Data and the Collective’ (MyData 2017, 24 September 2017), www.youtube.com/watch?v=BsZ05MVFXLU.

18 For further details, see UNCTAD, ‘Summary of Adoption of E-Commerce Legislation Worldwide’, https://perma.cc/M7MS-E8AF.

19 Fjeld et al., note 4 above, at 49; JA Kroll et al., ‘Accountable Algorithms’ (2017) 165 University of Pennsylvania Law Review 633, at 681.

20 See Human Rights Watch, ‘China: Big Data Fuels Crackdown in Minority Region’ (HRW, 26 February 2018), https://perma.cc/76QL-RTGK.

21 L Yuan, ‘Learning China’s Forbidden History, So They Can Censor It’, (The New York Times, 2 January 2019), https://perma.cc/3G2D-DUNH.

22 See generally Committee on Economic, Social and Cultural Rights, General Comment No 24 on State Obligations under the International Covenant on Economic, Social and Cultural Rights in the Context of Business Activities, UN Doc E/C.12/GC/24 (10 August 2017).

23 See AD Selbst and S Barocas, ‘The Intuitive Appeal of Explainable Machines’ (2018) 87 Fordham Law Review 1085, at 1100–1120 (on the rationales for explainability of algorithms); Centre for Data Innovation, ‘Re: Competition and Consumer Protection in the 21st Century Hearings’, Project Number P181201, 15 February 2019.

25 S Wachter et al., ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76, at 78.

26 AD Selbst and J Powles, ‘Meaningful Information and the Right to Explanation’ (2017) 7(4) International Data Privacy Law 233, at 239.

27 M Perel and N Elkin-Koren, ‘Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement’ (2017) 69 Florida Law Review 181, at 184–185, 188; Kroll et al., note 19 above, at 657, 660 (several technologies employ deep learning AI, which constantly self-learns and improvises its design, increasing the difficulty for engineers to explain the outputs of its algorithms).

28 Kroll et al., note 19 above, at 642. Similarly, see K Martin, ‘Ethical Implications and Accountability of Algorithms’ (2019) 160 Journal of Business Ethics 835, at 844.

29 DK Citron and F Pasquale, ‘The Scored Society: Due Process for Automated Predictions’ (2014) 89 Washington Law Review 1, at 25–30.

30 D Lehr and P Ohm, ‘Playing with the Data: What Legal Scholars Should Learn about Machine Learning’ (2017) 51 UC Davis Law Review 653, at 663–664.

31 Personal Data Protection Commission Singapore, Footnote note 15 above; Department of Industry, Innovation and Science, Footnote note 15 above; European Commission, Policy and Investment Recommendations for Trustworthy AI (26 June 2019); European Commission, Structure for a White Paper on Artificial Intelligence – A European Approach (2020) (leaked draft), https://perma.cc/M7QH-UEQV, at 16–17; UK House of Lords (Select Committee on Artificial Intelligence), AI in the UK: Ready, Willing and Able? (16 April 2018).

32 UK House of Lords, ibid., at 128; Personal Data Protection Commission Singapore, note 15 above, at 6; Department of Industry, Innovation and Science, note 15 above.

33 European Commission, ‘Ethics and Data Protection’ (2018), https://perma.cc/V2C4-8KBK.

34 See Article 29 Data Protection Working Party, Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679 (6 February 2018), at 10.

35 Profiling is defined to include any form of automated processing that considers an individual’s personal information to analyse their lives. See GDPR art. 4(4).

36 See Wachter et al., note 25 above; Selbst and Powles, note 26 above; L Edwards and M Veale, ‘Slave to the Algorithm? Why a “Right to Explanation” Is Probably Not the Remedy You’re Looking For’ (2017) 16 Duke Law & Technology Review 18.

37 See L Edwards and M Veale, ‘Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?’ (May/June 2018) AI Ethics 46, at 48.

38 See Algorithmic Accountability Act of 2019 (Proposed Bill), https://perma.cc/V5UQ-LZ53.

39 See M Wu, ‘Digital Trade-Related Provisions in Regional Trade Agreements: Existing Models and Lessons for the Multilateral Trade System’ (2017) ICTSD, at 25.

40 Digital Economy Partnership Agreement (DEPA), art. 8.2.

41 See generally C Ryngaert and M Taylor, ‘The GDPR as Global Data Protection Regulation?’ (2020) 114 AJIL Unbound 5.

42 A Roberts et al., ‘Toward a Geoeconomic Order in International Trade and Investment’ (2019) 22(4) Journal of International Economic Law 655, at 673–675.

43 See K Nissim et al., ‘Bridging the Gaps Between Computer Science and Legal Approaches to Privacy’ (2018) 31(2) Harvard Journal of Law & Technology 689.

44 N Mishra, ‘Privacy, Cybersecurity, and GATS Article XIV: A New Frontier for Trade and Internet Regulation?’ (2020) 19 World Trade Review 341, at 344–346.

45 GDPR, arts. 44–45. See also ‘Adequacy Decisions’ (European Commission), https://ec.europa.eu/info/law/law-topic/data-protection/international-dimension-data-protection/adequacy-decisions_en.

46 See J Vanian, ‘Microsoft Just Built a Special Version of Windows for China’ (Fortune, 23 May 2017), https://perma.cc/WG34-F7FK; B Darrow, ‘IBM Gives China Sneak Peek of Software Source Code: Report’ (Fortune, 16 October 2015), https://perma.cc/F2N5-6MRE.

47 Such a measure could also violate the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) art. 39, for example, if the measure affects vital commercial interests of foreign companies by increasing the chances of trade secret theft. However, this chapter does not cover justifications of data ethics-related measures under TRIPS. See White House, ‘How China’s Economic Aggression Threatens the Technologies and Intellectual Property of the United States and the World’ (June 2018), https://perma.cc/4ZE6-FQ89. See also JY Qin, ‘Forced Technology Transfer and the US–China Trade War: Implications for International Economic Law’ (2019) 22(4) Journal of International Economic Law 743, at 745–746.

48 See RH Weber, ‘Regulatory Autonomy and Privacy Standards under the GATS’ (2012) 7 Asian Journal of WTO and International Health Law & Policy 25; S Yakovleva and K Irion, ‘The Best of Both Worlds: Free Trade in Services and EU Law on Privacy and Data Protection’ (2016) 2(2) European Data Protection Law Review 191.

49 See AB Report, Mexico – Taxes on Soft Drinks [79] (‘laws and regulations’ refers to domestic laws and regulation, and not international law, unless it is incorporated into domestic law).

50 Panel Report, Colombia – Ports of Entry [7.514]; AB Report, US – Shrimp (Thailand) [7.174]. See also AB Report, Korea – Various Measures on Beef [157]; AB Report, Thailand – Cigarettes (Philippines) [177]; AB Report, US – Gambling [6.536]–[6.537].

51 For evolutionary interpretation, see AB Report, US – Shrimp [129].

52 For a more detailed analysis, see Mishra, note 44 above, at 352.

53 See Panel Report, US – Gambling [6.648].

54 GATS, art. XIV(a), note 5.

55 M Wu, ‘Free Trade and the Protection of Public Morals: An Analysis of the Newly Emerging Public Morals Clause Doctrine’ (2008) 33 Yale Journal of International Law 215; S Charnovitz, ‘The Moral Exception in Trade Policy’ (1998) 38 Virginia Journal of International Law 689, at 743.

56 Panel Report, US – Gambling [6.461].

57 See generally G Marceau, ‘Evolutive Interpretation by the WTO Adjudicator’ (2018) 21(4) Journal of International Economic Law 791. For a discussion of WTO disputes on public morals, see RY Simo, ‘Trade and Morality: Balancing Between the Pursuit of Non-Trade Concerns and the Fear of Opening the Floodgates’ (2019) 51 George Washington International Law Review 407.

58 Panel Report, China – Publications and Audiovisual Products [7.759].

59 AB Report, US – Gambling [296].

60 AB Reports, EC – Seal Products [5.199].

61 Panel Report, Brazil – Taxation [7.591]–[7.592].

62 Panel Report, Colombia – Textiles [7.338]–[7.339]; AB Report, Colombia – Textiles [5.105].

63 Panel Report, EC – Seal Products [7.381]–[7.383].

64 AB Report, EC – Seal Products [5.198].

65 AB Report, EC – Seal Products [5.199].

66 AB Report, EC – Seal Products [5.200].

67 See JQ Whitman, ‘The Two Western Cultures of Privacy: Dignity Versus Liberty’ (2004) 113 Yale Law Journal 1151.

68 Scholars have generally advocated that ‘public morals’ could include universal human rights. See C Glinski, ‘CSR and the Law of the WTO: The Impact of Tuna Dolphin II and EC–Seal Product’ (2017) 1 Nordic Journal of Commercial Law 121, at 133.

69 G Marceau, ‘WTO Dispute Settlement and Human Rights’ (2002) 13(4) European Journal of International Law 753, at 761, 777, 813–814; SM Zonaid, ‘Trading in Human Rights: Questioning the Advance of Human Rights into the World Trade Organization’ (2015) 27 Florida Journal of International Law 261, at 286.

70 AB Report, US – Gambling [292].

71 AB Report, EC – Seal Products [5.302].

72 See discussion in subsection A, Section II.

73 See T Maurer et al., ‘Technological Sovereignty: Missing the Point?’, in M Maybaum et al. (eds), Architectures in Cyberspace (Tallinn, NATO CCD COE Publications, 2015), at 53, 61–62; K Komaitis, ‘The “Wicked Problem” of Data Localization’ (2017) 3(2) Journal of Cyber Policy 355, at 361–362.

74 IEEE, note 13 above, at 28.

75 AB Report, China – Publications and Audiovisual Products [306].

76 See JP Meltzer, ‘The Impact of Artificial Intelligence on International Trade’ (2018), https://perma.cc/A3H7-FXVB (in the context of AI-driven technologies); A Goldfarb and D Trefler, ‘AI and International Trade’ (2017), https://perma.cc/5Z9K-29EK, at 24–29.

77 AB Report, US – Gambling, [308]; AB Report, China – Publications and Audiovisual Products [326]–[327]; AB Report, EC – Seal Products [5.279].

78 See AB Report, Brazil – Retreaded Tyres [156]; AB Report, China – Publications and Audiovisual Products [246].

79 See S-Y Peng, ‘The Rule of Law in Times of Technological Uncertainty: Is International Economic Law Ready for Emerging Supervisory Trends?’ (2019) 22 Journal of International Economic Law 1, at 1315.

80 See AB Report, Brazil – Retreaded Tyres [151], [211]; Panel Report, China – Rare Earths [7.186]; Panel Report, Australia – Plain Packaging [7.1384]–[7.1391].

81 L DeNardis and M Raymond, ‘The Internet of Things as a Global Policy Frontier’ (2017) 15 UC Davis Law Review 475, at 493.

82 AB Report, US – Shrimp [158].

83 AB Report, EC – Seal Products [5.302].

84 Ibid.

85 G Moon, ‘A “Fundamental Moral Imperative”: Social Inclusion, the Sustainable Development Goals and International Trade Law After Brazil – Taxation’ (2018) 52(6) Journal of World Trade 995, at 1004.

86 S Nuzzo, ‘Tackling Diversity Inside WTO: GATT Moral Clause After Colombia – Textiles’ (2017) 10(1) European Journal of Legal Studies 267, at 290–292; JC Marwell, ‘Trade and Morality: The WTO Public Morals Exception After Gambling’ (2006) 81 New York University Law Review 802, at 805.

87 See generally Mishra, note 44 above.

88 Kroll et al., note 19 above, at 642.

89 Wachter et al., note 25 above, at 99.

90 See C Sabel et al., ‘Regulation under Uncertainty: The Coevolution of Industry and Regulation’ (2018) 12 Regulation and Governance 371, at 373, 375 (arguing that uncertainties can prompt coordination among firms and between firms and regulatory bodies).

91 GATS art. VI:4, read with art. VI:5, allows panels to take into account only the technical standards of multilateral institutions. A possible route is exploring technical-barriers-to-trade-style provisions for trade in services.

14 Disciplining Artificial Intelligence Policies: World Trade Organization Law as a Sword and a Shield

* An earlier version of this chapter received the Young Scholar Award from the Asian International Economic Law Network (AIELN) in 2019. The authors give thanks to Peter Van den Bossche, Ching-Fu Lin, Shin-yi Peng, Thomas Streinz and Rolf H. Weber for their comments.

1 “Secretary-General’s Message for Third Artificial Intelligence for Good Summit” (United Nations, 28 May 2019), https://perma.cc/B5HW-RV5U (hereinafter SG Message for AI).

2 Congressional Research Service (CRS), “Artificial Intelligence and National Security” (2019), https://perma.cc/B5TC-J2U9, at 10.

3 Ibid., at 3. For discussions on the dual use of AI technologies, see M Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” (2018), https://perma.cc/Z2KJ-WYJ3, at 79.

4 “Commission White Paper on Artificial Intelligence: A European Approach to Excellence and Trust” (2020), https://perma.cc/24AE-UJGM, at 10; see also J Purshouse and L Campbell, “Privacy, Crime Control and Police Use of Automated Facial Recognition Technology” (2019) 3 Criminal Law Review 188 (arguing that England and Wales should adopt a “narrower and more prescribed legal framework” in their use of facial recognition to comply with international law).

5 “G20 Ministerial Statement on Trade and Digital Economy” (2019), https://perma.cc/WCC2-J32P.

6 OECD, “Recommendation of the Council on Artificial Intelligence” (2019) OECD/LEGAL/0449, https://perma.cc/DV5K-B6A3.

7 See M Risse, “Human Rights and Artificial Intelligence: An Urgently Needed Agenda” (2018), https://perma.cc/SX67-78YE; and Beijing Academy of Artificial Intelligence, “Beijing AI Principles” (2019), https://perma.cc/GB28-8J6A. For comments concerning the “Beijing Principles”, see W Knight, “Why Does Beijing Suddenly Care about AI Ethics?” (MIT Technology Review, 31 May 2019), https://perma.cc/3KDH-QHJJ.

8 See J Thornhill, “Formulating Values for AI Is Hard When Humans Do Not Agree” (Financial Times, 22 July 2019), https://perma.cc/5XAG-JQXC.

9 Such instruments are commonly referred to as “autonomous sanctions” by Australia and the United States. Shaw convincingly argued that non-military sanctions are “legitimate method[s] of showing displeasure” and do not contravene general public international law. MN Shaw, International Law (8th ed., Cambridge, Cambridge University Press, 2017), 859.

10 See subsection C in Section II for detailed examples.

11 See Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence (PRC), which sets out a strategy of “civil–military integration” in the PRC’s AI development plan, and seeks to promote the “sharing and joint use” of AI innovation platforms, including data resources, cloud service platforms, etc.

12 See CRS, note 2 above, at 21. See also “Military-Civil Fusion and the People’s Republic of China”, https://perma.cc/4DMR-B2K9.

13 CRS, note 2 above, at 20.

14 See M Wang, China’s Algorithms of Repression: Reverse Engineering a Xinjiang Police Mass Surveillance App (New York, Human Rights Watch, 2019); Purshouse and Campbell, note 4 above.

15 See A Polyakova, “Weapons of the Weak: Russia and AI-Driven Asymmetric Warfare” (Brookings, November 2018), https://perma.cc/CG8V-U4QA (arguing that Russian institutions interfered with the 2016 US presidential elections by AI-powered propaganda), and “The Propaganda Tools Used by Russians to Influence the 2016 Election” (New York Times, 16 February 2018), https://perma.cc/CP7C-BV4L.

16 SG Message for AI, note 1 above.

18 “Treasury Sanctions Chinese Entity and Officials Pursuant to Global Magnitsky Human Rights Accountability Act” (U.S. Department of the Treasury, 9 July 2020), https://perma.cc/8ZFE-9XK8. For background information, see “Xinjiang Supply Chain Business Advisory” (2020), https://perma.cc/G53R-D66W. Also see Wang, note 14 above.

19 “European Parliament Resolution on the PRC National Security Law for Hong Kong and the Need for the EU to Defend Hong Kong’s High Degree of Autonomy” (2020) 2020/2665, at para. 13. For an earlier resolution, see “European Parliament Resolution of 18 July 2019 on the Situation in Hong Kong” (2019) 2019/2732(RSP), at para. 11.

20 “Secretary Michael R. Pompeo with Laura Ingraham of Fox News” (US Department of State, 6 July 2020), https://perma.cc/HLN9-UVUN; F Ryan et al., “Mapping More of China’s Technology Giants: AI and Surveillance” (2019) ASPI Issues Paper Report No. 24 (proposing that TikTok is a “vector for censorship and surveillance”, empowered by an AI‑powered algorithm).

21 See “China: India’s Ban on Chinese Apps May Violate WTO Rules” (China Global Television Network, 1 July 2020), https://perma.cc/7EEA-HVE8 (reporting that Ji Rong, spokesperson for the Chinese embassy in New Delhi, said that India’s ban of certain Chinese mobile apps including TikTok and WeChat “runs against fair and transparent procedure requirements, abuses national security exceptions and (is suspected of) violating WTO rules”).

22 S Lester and H Zhu, “A Proposal for ‘Rebalancing’ to Deal with ‘National Security’ Trade Restrictions” (2019) 42 Fordham International Law Journal 1451.

23 W Zhou and Q Kong, “Why Australia’s Huawei Ban Is Unjustifiable under WTO” (China Global Television Network, 29 April 2019), https://perma.cc/A4DW-3YB5.

24 J Fernyhough, “Australia’s Huawei Ban on Shaky Ground at WTO” (Australian Financial Review, 15 April 2019), https://perma.cc/EH2L-U3NT.

25 Panel Report, US–Offset Act (Byrd Amendment) (2002), para. 7.106.

26 Appellate Body Report, US–Carbon Steel (India) (2014), para. 4.8; Article 1.1 of the SCM Agreement.

27 Appellate Body Report, US–Softwood Lumber IV (2004), para. 52.

28 A Boerding et al., “Data Ownership: A Property Rights Approach from a European Perspective” (2018) 11 Journal of Civil Law Studies 330; M Burri, “The Regulation of Data Flows through Trade Agreements” (2017) 48 Georgetown Journal of International Law 446.

29 F Casalini and JL González, “Trade and Cross-Border Data Flows” (2019) OECD Trade Policy Papers, No. 220.

30 J Sadowski, “When Data Is Capital: Datafication, Accumulation, and Extraction” (2019) 6 Big Data & Society 1.

31 See N Lindsey, “State DMVs Selling Personal Data for Millions of Dollars in Profit” (CPO Magazine, 18 September 2019), https://perma.cc/7SBW-KRF3.

32 See Appellate Body Report, US–Large Civil Aircraft (2nd complaint) (2012), para. 613.

33 Appellate Body Report, US–Softwood Lumber IV (2004), para. 64.

34 Appellate Body Report, US–Large Civil Aircraft (2nd complaint) (2012), paras 608–609.

35 Ibid., at para. 624.

36 Appellate Body Report, US–Softwood Lumber IV (2004), para. 69.

37 Appellate Body Report, US–Large Civil Aircraft (2nd complaint) (2012), para. 618.

38 Panel Report, EC and Certain Member States–Large Civil Aircraft (2011), para. 7.1036.

39 Ibid., at para. 7.1039.

40 The panel in US–Large Civil Aircraft (2nd complaint) (Recourse to Article 21.5) (2019) ruled that patents and right to data cannot be treated as “goods” within the meaning of Article 1.1(a)(1)(iii) since they are intangible (para. 8.832); such a ruling was rejected in the appeal (paras 5.70–5.77).

41 This is because of the high-tech nature of the AI industry and economies of scale.

42 Appellate Body Report, Canada–Renewable Energy (2013), para. 5.163; see also Appellate Body Report, Canada–Aircraft (1999), para. 157.

43 Such a determination would necessarily depend on the facts of actual cases.

44 Panel Report, US–Upland Cotton (2004), para. 7.1143; see also Appellate Body Report, EC and Certain Member States–Large Civil Aircraft (2011), para. 949.

45 Panel Report, EC and Certain Member States–Large Civil Aircraft (2011), para. 7.974.

46 In an actual case where the legislation in question is available, it is even possible that an assessment of the legislation will lead one to conclude that such a mechanism constitutes de jure specificity.

47 Appellate Body Report, EC–Seal Products (2014), para. 5.169, referring to Appellate Body Report, US–Gasoline (1996), 22.

48 Appellate Body Report, EC–Seal Products (2014), para. 5.168.

49 Appellate Body Report, EC–Seal Products (2014), para. 5.169, referring to Panel Report, US–Gambling (2005), para. 6.455.

50 Panel Report, US–Gambling (2005), para. 6.465. Note that US–Gambling (2005) is a case concerning Art. XIV(a) of the GATS. The interpretation in US–Gambling (2005) was subsequently adopted in the context of Article XX(a) of the GATT 1994 by the panels in China–Publications and Audiovisual Products (2009) (in para. 7.759) and EC–Seal Products (2014) (in para. 7.380). None of these interpretations was appealed.

51 Footnote Ibid., at para. 6.461. This was followed in Panel Report, EC–Seal Products (2014), para. 7.380 and confirmed in Appellate Body Report, EC–Seal Products (2014), paras 5.199–200.

52 Panel Report, EC–Seal Products (2014), para. 7.383.

60 This chapter assumes that the panel report in Russia–Traffic in Transit (2019), which was not appealed by either party, represents good law.

61 Panel Report, Russia–Traffic in Transit (2019), para. 7.131.

62 Ibid., at para. 7.133.

63 Ibid., at para. 7.77; also see para. 7.82.

64 Ibid., at para. 7.72.

66 Ibid., at para. 7.74.

68 Ibid., at para. 7.75.

70 Ibid., at para. 7.76.

71 However, note the somewhat cautious language used in the panel report: “An emergency in international relations would, therefore, appear to refer generally” (para. 7.76).

72 It appears that Russia–Traffic in Transit (2019) considers this to be a closed list: paras 7.74 and 7.76.

73 Panel Report, Russia–Traffic in Transit (2019), at para. 7.76.

74 Recall that the panel in Russia–Traffic in Transit (2019) ruled that “latent armed conflict, or of heightened tension or crisis, or of general instability engulfing or surrounding a state” would constitute “emergency in international relations”.

75 Panel Report, Russia–Traffic in Transit (2019), at para. 7.119.

76 Ibid., at para. 7.119.

77 Ibid., at footnote 204.

78 Ibid., at para. 7.121.

80 Ibid., at para. 7.119.

82 Panel Report, Russia–Traffic in Transit (2019), para. 7.133.

83 Ibid.

84 See also K Kenny, “Fulfilling the Promise of the UN Charter: Transformative Integration of Human Rights” (1999) 10 Irish Studies in International Affairs 44 (arguing that international conflicts in the 1990s confirmed that human rights violations could lead to the escalation of international conflict, thus the promotion of “respect for human rights” is crucial for the UN’s purpose of “the maintenance of international peace and security”).

85 It is obvious that the general “maintenance of international peace and security” is a UN Charter obligation. For example, Art. 43(1) of the Charter provides that “All Members … in order to contribute to the maintenance of international peace and security, undertake to make available to the Security Council, on its call”.

86 See Brundage et al., note 3 above, at 3.
