
12 - Public Morals, Trade Secrets, and the Dilemma of Regulating Automated Driving Systems

from Part IV - International Economic Law Limits to Artificial Intelligence Regulation

Published online by Cambridge University Press:  01 October 2021

Shin-yi Peng
Affiliation:
National Tsing Hua University, Taiwan
Ching-Fu Lin
Affiliation:
National Tsing Hua University, Taiwan
Thomas Streinz
Affiliation:
New York University School of Law

Summary

Automated driving systems (ADSs) are growing exponentially as one of the most promising AI applications. ADSs promise to transform the ways in which people commute and connect with one another, altering the conventional division of labor, social interactions, and provision of services. Regulatory issues such as testing and safety, cybersecurity, connectivity, liability, and insurance are driving governments to establish comprehensive and consistent policy frameworks. Of key importance are ADSs’ ethical challenges. How to align ADS development with the fundamental ethical principles embedded in a society remains a difficult question. The “Trolley Problem” aptly demonstrates such tension. While it seems essential to have rules and standards reflecting local values and contexts, potential conflicts and duplication may have serious trade implications for how ADSs are designed, manufactured, distributed, serviced, and driven across borders. This chapter examines the multifaceted, complex regulatory issues related to ADSs and uses the most controversial, ethical dimension to analyze the tensions between the protection of public morals and trade secrets under the WTO. It unpacks three levels of challenges that may translate into a regulatory dilemma in light of WTO members’ rights and obligations under the GATT, the TBT Agreement, and the TRIPS Agreement, and identifies possible venues of reconfiguration.

Type: Chapter
Information: Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration, pp. 237–254
Publisher: Cambridge University Press
Print publication year: 2021
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

I Introduction

The market for automated driving systems (ADSs, commonly referred to as automated vehicles, autonomous cars, or self-driving cars)Footnote 1 is predicted to grow from US$54.2 billion in 2019 to US$556.6 billion in 2026.Footnote 2 Annual global sales of vehicles equipped with ADSs are expected to reach around 21 million by 2035, with cumulative sales of 76 million through 2035,Footnote 3 in an inextricably connected global market of automobiles, information and communication technology (ICT), and artificial intelligence (AI) platforms and services, along a massive value chain that transcends borders. Indeed, ADSs – one of the most promising AI applications – build on software infrastructure that works with sensing technologies such as Light Detection and Ranging (LiDAR), radar, and high-resolution cameras to perform part or all of the dynamic driving tasks.Footnote 4 The ADS industry landscape is complex and dynamic, including not only automobile companies and suppliers (e.g., Daimler AG, Ford Motor Company, BMW AG, Tesla Inc., and Denso Corporation), but also ICT giants (e.g., Waymo, Intel Corporation, Apple Inc., NVIDIA Corporation, Samsung, and Baidu) and novel service providers (e.g., Uber, Lyft, and China’s Didi Chuxing) in different parts of the world. There has also been an increasing number of cross-sectoral collaborative initiatives between such companies, including the partnership between Uber and Toyota to expand the ride-sharing market,Footnote 5 and General Motors’ investment in Lyft, undertaken with the goal of developing self-driving taxis.Footnote 6

While governments around the world have been promoting ADS development and relevant industries,Footnote 7 they have also been contemplating rules and standards in response to their legal, economic, and social ramifications. Apart from road safety and economic development,Footnote 8 ADSs promise to transform the ways in which people commute between places and connect with one another, which will further alter the conventional division of labor, social interactions, and the provision of services. Regulatory requirements for testing and safety, as well as technical standards on cybersecurity and connectivity, are necessary for vehicles with ADSs to be allowed on roadways, but given the experimental nature of the related technologies, governments worldwide have not established comprehensive and consistent policy frameworks within their jurisdictions, let alone reached multilateral consensus or harmonization. Furthermore, liability rules, insurance policies, and new law enforcement tools are also relevant issues, if not prerequisites. Last but not least, the ethical challenges posed by ADSs play a key role in building the trust and confidence among consumers, societies, and governments needed to support their wide, full-scale application. How to align ADS research and development with the fundamental ethical principles embedded in a given society – with its own values and cultural contexts – remains a difficult policy question. The “Trolley Problem” aptly demonstrates such tension.Footnote 9 As will be discussed, such challenges not only touch upon substantive norms, such as morality, equality, and justice, but also call for procedural safeguards, such as algorithmic transparency and explainability.

Faced with such challenges, governments are designing and constructing legal and policy infrastructures with diverse forms and substances to facilitate the future of connected transportation. Major players along the global ADS value chain have yet to agree upon a common set of rules and standards to forge regulatory governance on a global scale, partly because of different political agendas and strategic positions.Footnote 10 While it seems essential to have rules and standards that reflect local values and contexts, potential conflicts and duplication may have serious World Trade Organization (WTO) implications. In Section II, this chapter examines key regulatory issues of ADSs along the global supply chain. Regulatory efforts and standard-setting processes among WTO members and international (public and private) organizations also evidence both convergence and divergence across different issues. While regulatory issues such as liability, cybersecurity, data flow, and infrastructure are multifaceted, complex, and fluid, and certainly merit scholarly investigation, this chapter cannot and does not intend to cover them all. Rather, in Section III, this chapter uses the most controversial (but not futuristic) issue – the ethical dimension of ADSs, which raises tensions between the protection of public morals and trade secrets – to demonstrate the regulatory dilemma faced by regulators and its WTO implications. It points out three levels of key challenges that may translate into a regulatory dilemma in light of WTO members’ rights and obligations, including those in the General Agreement on Tariffs and Trade (GATT), the Agreement on Technical Barriers to Trade (TBT Agreement), and the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement).Footnote 11 Section IV concludes.

II Automated Driving Systems: Mapping Key Regulatory Issues

A Regulatory Challenges Facing Automated Driving Systems and the “Moral Machine” Dilemma

At the outset, the use of terminology and taxonomy must be clarified. There exist various terms that are used to refer to vehicles equipped with different levels of driving automation systems (a generic term that covers all levels of automation), such as self-driving cars, unmanned vehicles, autonomous cars, and automated vehicles. However, for reasons to be elaborated later, this chapter consciously uses “ADSs” – namely, level 3–5 systems as defined by the SAE International’s taxonomy and definitionsFootnote 12 – to refer to the kinds of driving automation that require only limited human intervention and that more appropriately denote the essence of commonly known terms such as “self-driving cars” or “autonomous vehicles.” Indeed, the inconsistent and sometimes confusing use of terms such as “self-driving cars” or “autonomous vehicles” may lead to problems not only related to misleading marketing practices, mistaken consumer perceptions, and information asymmetry, but also insufficient and ineffective regulatory design. For instance, in the robotics and AI literature, the term “autonomous” has been used to denote systems capable of making decisions and acting “independently and self-sufficiently,”Footnote 13 but the use of such terms “obscures the question of whether a so-called ‘autonomous vehicle’ depends on communication and/or cooperation with outside entities for critical functionality (such as data acquisition and collection).”Footnote 14 Some products may be fully autonomous as long as their functions are executed entirely independently and self-sufficiently to the extent entailed in level 5, while others may depend on external cooperation and connection to work (which may fall under the scope of level 3 or level 4). Yet when the term “autonomous vehicle” is commonly used to refer to level 5, levels 3 and 4, or even all levels of driving automation as defined in various legislation enacted in different states,Footnote 15 regulatory confusion ensues. 
Comparable conceptual and practical problems can also be found with the use of “self-driving,” “automated,” or “unmanned” in regulatory discourse.
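The taxonomy the chapter relies on can be made concrete. The following sketch is purely illustrative (the level labels paraphrase SAE J3016’s names, and the `is_ads` helper is this sketch’s own shorthand, not an official SAE construct) and shows why the chapter reserves “ADS” for levels 3–5 only:

```python
# Illustrative mapping of SAE J3016's six levels of driving automation
# (labels paraphrased); per this chapter's usage, "ADS" covers levels 3-5.

SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def is_ads(level: int) -> bool:
    """True only for levels 3-5, the systems this chapter calls ADSs."""
    return level >= 3

print([lvl for lvl in SAE_LEVELS if is_ads(lvl)])  # [3, 4, 5]
```

A level-2 system (e.g., lane-keeping plus adaptive cruise control) falls outside this definition even though it is often marketed as “self-driving,” which is precisely the source of the regulatory confusion noted above.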

While ADSs offer many benefits to road safety, economic growth, and transportation modernization,Footnote 16 myriad regulatory issues – such as safety, testing and certification, liability and insurance, cybersecurity, data flow, ethics, connectivity, infrastructure, and service – must be appropriately addressed.Footnote 17 First, reducing human errors does not mean that ADSs are free from machine error, especially when the technology continues to grow in complexity.Footnote 18 A review of recent incidents involving Tesla and Volvo-Uber systems suggests that ADSs may be subject to different standards of care, considering the many new safety threats and consumer expectations for the technology.Footnote 19 Other commentators also point to cybersecurity and industry risks related to ADSs, given their reliance on data collection, processing, and transmission through vehicle-to-vehicle and vehicle-to-infrastructure communications.Footnote 20 The multifaceted yet under-addressed issues of privacy and personal freedom also call for clearer rules and standards.Footnote 21 Issues including the Internet of Things (IoT), 5G networks, and smart city development – which are beyond the scope of this chapter – also play a crucial role in the regulatory discourse surrounding ADSs.Footnote 22 The different risks posed by ADSs and IoT and their consequential interactions with the physical world may have crucial ramifications for international trade and investment law.Footnote 23

This chapter will not exhaust all of these regulatory issues, but rather focuses on the most controversial, ethical dimension of ADSs. There are concerns about the “crash algorithms” of ADSs – the programs that decide how a vehicle responds in an unavoidable accident.Footnote 24 The underlying ethical issues stem from the infamous “Trolley Problem,” a classic thought experiment pitting utilitarianism against deontological ethics, introduced in 1967 by Philippa Foot.Footnote 25 A runaway trolley is heading toward five people who are tied up and lying on the main track. You are standing next to a lever that can switch the trolley to a side track, on which only one tied-up person is lying. Would you pull the lever, saving five and killing one? What is the right thing to do? The advent of ADSs turns the Trolley Problem from an exercise in applied philosophy into a real-world challenge rather than a mere thought experiment.Footnote 26 Should ADSs prioritize the lives of the vehicle’s passengers over those of pedestrians? Should ADSs kill the baby, the doctor, the mayor, the jaywalker, or the grandma? Or should ADSs be programmed to reach the decision that is most beneficial to society as a whole, taking into account a massive range of factors? Researchers at the Massachusetts Institute of Technology (MIT) designed scenarios representing ethical dilemmas that ask people to state preferences among males, females, the young, the elderly, low-status individuals, high-status individuals, law-abiding individuals, law-breaking individuals, and even fit or obese pedestrians in a fictional, unavoidable car crash.Footnote 27 They collected and consolidated around 40 million responses from millions of individuals across 233 jurisdictions and published their results in an article titled “The Moral Machine Experiment.”Footnote 28 How does the world respond to the Trolley Problem?
While a general, global moral preference can be found, there exist strong and diverse demographic variations specifically associated with “modern institutions” and “deep cultural traits.”Footnote 29 For instance, respondents from China, Japan, Taiwan, South Korea, and other East Asian countries prefer saving the elderly over the young, while those in North America and Europe show the opposite preference.Footnote 30
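To see how such demographic variation could surface in regulation, consider a deliberately simplified, hypothetical sketch of a utilitarian “crash algorithm.” Every name, attribute, and weight below is invented for illustration; real ADSs are not known to work this way. The point is that jurisdiction-specific preferences over whom to spare would be hard-coded into the trajectory choice, which is exactly the kind of design decision a regulator might seek to mandate or prohibit:

```python
# Hypothetical utilitarian "crash algorithm" sketch; all names, person
# categories, and weights are invented purely for illustration.

def harm_score(victims, weights):
    """Weighted 'cost' of harming the people a trajectory would hit."""
    return sum(weights.get(person, 1.0) for person in victims)

def choose_trajectory(options, weights):
    """Return the option (tuple of victims) with the lowest weighted harm."""
    return min(options, key=lambda victims: harm_score(victims, weights))

# Two invented weight sets echoing the divergent preferences the Moral
# Machine experiment reported (spare-the-young vs. spare-the-elderly).
weights_a = {"child": 3.0, "adult": 1.0, "elderly": 0.5}  # spare the young
weights_b = {"child": 0.5, "adult": 1.0, "elderly": 3.0}  # spare the elderly

dilemma = [("child",), ("elderly",)]  # one trajectory harms a child, the other an elderly person
print(choose_trajectory(dilemma, weights_a))  # ('elderly',) - the child is spared
print(choose_trajectory(dilemma, weights_b))  # ('child',) - the elderly person is spared
```

Identical code, different weights, opposite outcomes: any such encoding fixes contestable moral preferences at design time, long before an accident occurs.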

As ADSs cannot be subjectively assessed ex post for blame or moral responsibility, it seems necessary – yet it is unclear how – to design rules governing how ADSs react when faced with moral dilemmas.Footnote 31 Presumably, ethical as well as cultural, demographic, and institutional factors may shape heterogeneous regulatory measures that could increase frictions in international trade. From a practical, legalist perspective, different tort systems in varying jurisdictions may also have an anchoring effect on ADS designs.Footnote 32 While the decision made at the time of an unavoidable accident has immense legal, economic, and moral consequences, it is predetermined when the algorithms are written and built into ADSs. Algorithms are not objective. Rather, they carry the existing biases and discrimination against minority groups in human society, which are reflected and reinforced by the training data used to power the algorithms.Footnote 33 Further, algorithms do not build themselves, so they may carry the values and preferences of the people who write or train them.Footnote 34 ADS manufacturers are therefore increasingly exposed to legal and reputational risks associated with these moral challenges.Footnote 35 Governments have not yet addressed these ethical puzzles posed by ADS algorithms.

B Regulatory Initiatives at National and Transnational Levels

One may ask whether there are existing or emerging international standards that can serve as a reference for domestic regulations. What approaches are regulators in different jurisdictions taking to address these issues? This chapter maps out some representative regulatory initiatives that have taken place at both the national and the transnational level and are respectively backed by public, private, and hybrid institutions – without concrete harmonization.Footnote 36

What are the relevant positions of the governments of these countries in the global value chain of automated vehicles? What are their respective regulatory governance strategies in light of concerns related to economic growth, national security, and business competition?Footnote 37 To what extent are these countries competing (or cooperating) with one another to lead the global standard-setting process in various international arenas?Footnote 38 At the national level, crucial questions have largely been left unaddressed. A leader in regulating ADSs, the United States Department of Transportation (US DoT) has been stocktaking and monitoring current ADS standards development activities, including those led by, inter alia, the SAE International, the International Organization for Standardization (ISO), the Institute of Electrical and Electronics Engineers (IEEE), the Federal Highway Administration (FHWA), the American Association of Motor Vehicle Administrators (AAMVA), and the National Highway Traffic Safety Administration (NHTSA), in relation to issues such as cybersecurity frameworks, data sharing, functional safety, event data recorders, vehicle interaction, encrypted communications, infrastructure signage and traffic, and testing approaches.Footnote 39 While a couple of initiatives might partly touch upon some issues with ethical implications,Footnote 40 nothing concrete has been designed to address ADSs’ ethical issues.
In the United Kingdom, the British Standard Institution published a prestandardization document based on relevant guidelines developed by the UK Department for Transport and Centre for the Protection of National Infrastructure to facilitate further standardization on cybersecurity.Footnote 41 Taiwan also set up a sandbox scheme for the development and testing of vehicles equipped with ADSs,Footnote 42 and the sandbox is open to a broadly defined scope of experimentation, including automobiles, aircraft, and ships, and even a combination of these forms.Footnote 43

Again, none of these initiatives specifically addresses the ethical issues of ADSs. The world’s firstFootnote 44 concrete government initiative specifically on ADS ethics is a report issued by the Ethics Commission for Automated and Connected Driving, a special body appointed by Germany’s Federal Ministry of Transport and Digital Infrastructure.Footnote 45 The report consists of twenty ethical rules for ADSs.Footnote 46 Of particular importance are the rules providing that “[t]he protection of individuals takes precedence over all other utilitarian considerations,”Footnote 47 that “[t]he personal responsibility of individuals for taking decisions is an expression of a society centred on individual human beings,”Footnote 48 and that “[i]n hazardous situations that prove to be unavoidable, the protection of human life enjoys top priority in a balancing of legally protected interests.”Footnote 49 In particular, Ethical Rule 8 provides that:

Genuine dilemmatic decisions, such as a decision between one human life and another … can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable …. Such legal judgements, made in retrospect and taking special circumstances into account, cannot readily be transformed into abstract/general ex ante appraisals and thus also not into corresponding programming activities.Footnote 50

Ethical Rule 9 further prescribes that “[i]n the event of unavoidable accident situations, any distinction based on personal features,” such as age, gender, and physical or mental conditions, “is strictly prohibited.”Footnote 51 While the ethical rules are not mandatory, they certainly mark the first step toward addressing ADSs’ ethical challenges.Footnote 52 It remains to be seen how these rules will be translated into future legislation and regulation in Germany and beyond.Footnote 53
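Ethical Rule 9 can be read as a verifiable design constraint rather than a programming instruction: a conformity assessor would not ask how the crash logic decides, only whether it draws on prohibited personal features at all. The following is a minimal, hypothetical sketch of such a check (the feature names and the function are invented for illustration, not part of the German report):

```python
# Hypothetical conformity check inspired by Ethical Rule 9: crash-decision
# logic may draw no distinction based on personal features. All feature
# names below are invented for illustration.

PROHIBITED_FEATURES = {"age", "gender", "physical_condition", "mental_condition"}

def rule9_compliant(decision_inputs):
    """True if the crash-decision inputs include no prohibited personal features."""
    return PROHIBITED_FEATURES.isdisjoint(decision_inputs)

# A utilitarian design keyed to age fails; one keyed only to impersonal
# factors such as counts and speed passes.
print(rule9_compliant({"age", "number_of_people"}))    # False
print(rule9_compliant({"number_of_people", "speed"}))  # True
```

Note how this framing sidesteps the dilemma Rule 8 identifies: the regulator certifies what the algorithm may not consider, without having to standardize the “genuinely dilemmatic” decision itself.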

Other relevant initiatives, while not specifically addressing ADS ethics, include algorithmic accountability rules (generally applicable to data protection and AI applications) that may inform future regulations. For instance, the European Union’s General Data Protection Regulation (GDPR) sets out rights and obligations in relation to algorithmic explainability and accountability in automated individual decision-making.Footnote 54 The European Commission also established the High-Level Expert Group on Artificial Intelligence in 2018, which published the final version of its Ethics Guidelines for Trustworthy Artificial Intelligence in April 2019.Footnote 55 Meanwhile, lawmakers in the United States recently tabled a new bill, the Algorithmic Accountability Act of 2019, which would require companies to audit systems based on machine learning algorithms, to examine instances of potential bias and discrimination therein, and to fix any issues found in a timely manner.Footnote 56

There have been active and dynamic regulatory initiatives at the transnational level.Footnote 57 The United Nations Economic Commission for Europe (UNECE)Footnote 58 and the 1968 Vienna Convention on Road TrafficFootnote 59 have struggled to change the formal rules under their existing framework, given the complexity of the issues, high negotiation costs, and institutional inflexibility.Footnote 60 The Vienna Convention was somewhat passive toward the development of automated driving systems until an amendment to its Articles 8 and 39 entered into force in March 2016.Footnote 61 The amendment allows for the transfer of driving tasks from humans to vehicles under certain conditions, lifting the formalistic requirement that a “human” must be in charge of driving tasks.Footnote 62 In September 2018, the UNECE’s Global Forum on Road Traffic Safety (WP.1) adopted a resolution to promote the deployment of vehicles equipped with ADSs in road traffic.Footnote 63 This resolution is rather soft and represents an informal approach to guiding Contracting Parties to the 1968 Vienna Convention on the safe deployment of ADSs in road traffic.Footnote 64 In any case, because major ADS players like the United States, China, and Japan are not contracting parties to the Vienna Convention, what is done under the treaty body may not readily generate direct policy relevance and normative influence at the national level (at least for the moment). A few additional private and hybrid organizations have also been engaging in ADS standard-setting, including the SAE International,Footnote 65 the ISO,Footnote 66 and the IEEE.Footnote 67 Among such standard-setting bodies, the SAE International and the ISO are the most comprehensive and most widely cited and embraced references.
Given the complex and dynamic nature of ADS technologies, the SAE International and the ISO, as informal, private/hybrid bodies with more institutional flexibility, have been able to incorporate their members’ expertise and work together in developing common standards – SAE/ISO standards on road vehicles and intelligent transport systems.Footnote 68 The SAE International further offers the ISO a Secretariat function and services for the ISO’s TC204 Intelligent Transport System work.Footnote 69 With its transnational scope, domain expertise, and industry support, the SAE International’s standards – especially the recently clarified J3016 standard and its six levels of driving automation – serve as the “most-cited reference” for the ADS industry and its governance.Footnote 70 While there has been progress at the transnational level, these regulatory initiatives have yet to touch upon the contentious ethical issues that extend beyond a narrower understanding of the road safety of ADSs.Footnote 71

III Regulatory Autonomy under the World Trade Organization: Technical Standards, Public Morals, and Trade Secrets

As noted, complex ethical questions, algorithmic designs, and cultural, demographic, and institutional factors may readily translate into heterogeneous regulatory measures that could increase frictions in international trade and raise highly contentious issues under the GATT, TBT Agreement, and TRIPS Agreement. These potential frictions beg the questions: How much regulatory autonomy will WTO members enjoy in addressing the relevant public morals challenges by conditioning the import and sale of fully autonomous vehicles and dictating the design of ADS algorithms to reflect and respect their local values? What are the normative boundaries set by the relevant covered agreements? Bearing this in mind, this chapter uses the ethical dimensions of ADSs as an example to identify three levels of challenges – in terms of the substance, form, and manner of regulation – for WTO members in regulating this evolving technology.

As the MIT research demonstrated, while a general sense of global moral preference may be identified, there are salient diversities in terms of demographic variations, modern institutions, and cultural underpinnings.Footnote 72 It is therefore likely that some regulators in East Asian countries may adopt technical standards that uphold collective public morals and communal values in their efforts to regulate ADSs. Such technical standards may in turn prevent vehicles whose ADS algorithms (which may be trained with data collected from Western societies or written by programmers who do not embrace similar preferences) do not reflect local ethics and values from entering the market. For instance, if China requires that ADS algorithms built into fully autonomous vehicles make decisions about unavoidable crashes based on pedestrians’ “social status” or even their “social credit scores,”Footnote 73 and that vehicles not running on compliant algorithms be kept off the market, what are the legal and policy implications under the GATT and TBT Agreement? To achieve similar regulatory objectives, WTO members may require ADS manufacturers to disclose their algorithm designs (including source code and training data) to verify and ensure conformity with applicable technical standards. In this case, what boundaries established in the TRIPS Agreement may prohibit WTO members from forcing the disclosure of trade secrets (or other forms of intellectual property)?

A Public Moral Exception, Technical Regulations, and International Standards

First, import bans on vehicles equipped with ADSs designed and manufactured in a jurisdiction and a manner that reflect a different value set, even if reasonable, could violate the national treatment or most-favored-nation obligations under the GATT. Certainly it would be interesting to see whether vehicles equipped with ADS algorithms trained with different data reflecting different cultural and ethical preferences are “like products,”Footnote 74 or whether ADSs with “pet-friendly,” “kids-friendly,” and “elderly-friendly” algorithms are like products. How would diverse consumer morals in a given market influence the determination of likeness? The determination of likeness is “about the nature and extent of a competitive relationship between and among the products at issue,”Footnote 75 and underlying regulatory concerns “may play a role” only if “they are relevant to the examination of certain ‘likeness’ criteria and are reflected in the products’ competitive relationship.”Footnote 76 Given the compliance costs and the distributional role of the global value chain, “even-handed regulation would be found to treat like products less favorably.”Footnote 77 Furthermore, to discipline algorithm designs in terms of how source codes are written and what/how training data are fed, WTO members would need to regulate not only the end product, but also the process and production methods, which remain controversial issues in WTO jurisprudence.Footnote 78 Nevertheless, even if a violation of GATT Article I or III is found, such measures may well be justified under GATT Article XX(a), namely when they are “necessary to protect public morals” and “not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination between countries where the same conditions prevail, or a disguised restriction on international trade” – the so-called two-tier test.Footnote 79 Most other free trade agreements also contain such a standard exception, allowing parties to derogate from their obligations to protect public morals. Similar clauses can also be found in GATS Article XIV(a)Footnote 80 and TBT Agreement Article 2.2. Further examination includes whether the measures are “designed to protect public morals”Footnote 81 and whether they are “necessary” based on a weighing and balancing process.Footnote 82 Such a process has been the yardstick of the GATT Article XX necessity test, which is, as reaffirmed by the Appellate Body in China – Publications and Audiovisual Products, “a sequential process of weighing and balancing a series of factors,” including assessing the relative importance of the values or interests pursued by the measure at issue, considering other relevant factors, and comparing the measure at issue with possible alternatives in terms of reasonable availability and trade restrictiveness.Footnote 83 Most importantly, the definition and scope of “public morals” can be highly contentious, and WTO adjudicators have embraced a deferential interpretation:

[T]he term “public morals” denotes standards of right and wrong conduct maintained by or on behalf of a community or nation … the content of these concepts for Members can vary in time and space, depending upon a range of factors, including prevailing social, cultural, ethical and religious values … Members, in applying this and other similar societal concepts, should be given some scope to define and apply for themselves the concepts of “public morals” … in their respective territories, according to their own systems and scales of values.Footnote 84

More recently, the Appellate Body in EC–Seal Products also emphasized that WTO members must be given some scope to define and apply the idea of “public morals” pursuant to their own systems and values.Footnote 85 Given this deferential approach, WTO members appear to enjoy ample leeway in defining and applying public moral-based measures according to their own unique social systems and communal values.

Further, because the TBT Agreement applies cumulatively with the GATT, an ADS regulatory measure justified under the GATT may still violate the TBT Agreement, which similarly contains nondiscrimination obligations but lacks a public morals exception. According to Trachtman, “the scope of the TBT national treatment requirement has been interpreted somewhat narrowly compared to that of GATT, excluding from violation measures that ‘stem exclusively from a legitimate regulatory distinction,’ in order to avoid invalidating a broader scope of national technical regulations than the GATT.”Footnote 86 Under Articles 2.1 and 2.2 of the TBT Agreement, ADS regulatory measures are required to be sufficiently “calibrated” to different conditions in different areas, and to not be “more trade-restrictive than necessary to fulfill a legitimate objective, taking account of the risks non-fulfillment would create.”Footnote 87 That is, similarly to the GATT jurisprudence, a holistic weighing and balancing process is mandated, taking into account the degree of contribution, the level of trade restrictiveness, and the risks of non-fulfillment of the stated objectives, as well as a comparison with possible alternatives.Footnote 88 As will be demonstrated next, the necessity of regulatory measures mandating disclosure of source codes and training data (both the substance and the form of the regulation) may be fiercely challenged; at the same time, locating a reasonably available alternative can be equally problematic.

Given the transnational regulatory initiatives, Article 2.4 of the TBT Agreement also plays a crucial role here. WTO members are required to use the standards developed by the SAE/ISO and UNECE (so long as they are “relevant international standards”) as the bases for domestic regulations unless such standards cannot effectively or appropriately fulfill the legitimate objective of protecting public morals in the ADS issue area.Footnote 89 While this may impose certain (albeit weak) restrictions on the regulatory autonomy and flexibility of WTO members when designing and imposing their ADS algorithm rules and standards in the ethical dimension,Footnote 90 the implausible (if not impossible) global consensus on ethical decision-making means that such international standards remain far out of reach. In the long run, there might be more and more initiatives of international standards in this regard, potentially resulting in concerns over the structure, process, and participation in a standard-setting body as well as political confrontations at the TBT Committee.Footnote 91

B Automated Driving System Algorithms, Source Codes, and Training Data as “Undisclosed Information” under the TRIPS Agreement

Even if the substance of ADS regulatory measures does not violate existing obligations under the GATT and the TBT Agreement, WTO members may require ADS manufacturers to disclose their algorithm designs, source codes, and training data to verify compliance and achieve their regulatory objectives. If WTO members force ADS vehicle manufacturers or programmers to disclose their trade secrets – proprietary algorithm designs, source codes, and training data – can such measures survive scrutiny under the TRIPS Agreement? To be sure, entities that own ADS algorithms can seek protection through various channels, including patents, copyrights, and trade secrets.Footnote 92 However, the commercial practice in the ADS field (and in many other AI applications) has been to hold both source code and training data as trade secrets so as to maximize the protection of interests and remain competitive in the market.Footnote 93

Article 39.1 of the TRIPS Agreement requires members, when “ensuring effective protection against unfair competition as provided in Article 10bis of the Paris Convention (1967),” to “protect undisclosed information in accordance with paragraph 2.”Footnote 94 Article 39.2 further provides that information – when it is secret (not “generally known among or readily accessible to persons within the circles that normally deal with the kind of information in question”), has commercial value, and is controlled by the lawful custodian – shall be protected “from being disclosed to, acquired by, or used by others without their consent in a manner contrary to honest commercial practices.”Footnote 95 This requires WTO members to provide minimum protections for undisclosed information, recognized in Article 1.2 as a category of intellectual property,Footnote 96 in accordance with the conditions and criteria provided in Article 39.2.Footnote 97

Article 39 does not explicitly prohibit members from promulgating laws, consistent with other provisions of the TRIPS Agreement, that allow lawful disclosure or create exceptions under which trade secrets may be compelled to be disclosed. Yet what constitutes a lawful disclosure under the TRIPS Agreement can be controversial. Can members promulgate any law that requires disclosure of trade secrets to serve certain regulatory objectives? Are all measures regulating ADSs and requiring disclosure of source code and training data for conformity assessment consistent with the TRIPS Agreement? There has been no case law on Article 39, but the rejection, during the negotiation process, of the United States’ proposal to include “theft, bribery, [and] espionage” of secrets in “a manner contrary to honest commercial practices”Footnote 98 indicates that the question can prove contentious.Footnote 99 A contextual reading of Articles 7 and 8 of the TRIPS Agreement suggests that “Members may … adopt measures necessary to … promote the public interest in sectors of vital importance to their socio-economic and technological development,”Footnote 100 and that “a balance of rights and obligations”Footnote 101 is called for, but such measures cannot “unreasonably restrain trade.”Footnote 102 The scope of disclosure, the regulated entities, the manner of disclosure, and enforcement and safeguard mechanisms may therefore be crucial factors in determining consistency. In this sense, in China’s social credit scenario, a limited approach that requires essential source code and training data (from the companies that program the algorithms making ethical decisions, rather than all actors along the global ADS supply chain) to be disclosed to an expert committee (or a similar institutional design)Footnote 103 for review and certification, rather than a wholesale, systematic forced disclosure, may appear more TRIPS-consistent.
Additional safeguards that prohibit government agencies from sharing disclosed proprietary information with others may also help to avoid inappropriate forced technology transfers, unfair competition, and unfair commercial use.Footnote 104 Relatedly, some recent megaregional free trade agreements (mega-FTAs) have included provisions that explicitly prevent governments from demanding access to an enterprise’s proprietary software source code.Footnote 105 Demands for stronger protection of source code and training data and limitations on governments’ regulatory room for maneuver are likely to grow in the age of AI.

C “Algorithmic Black Box” and the Limits of Regulatory Measures

An additional layer of regulatory challenge, one that may undermine the effectiveness (and therefore the necessity) of these measures, stems from the technological nature of machine/deep learning algorithms: their opacity, or what a leading commentator has criticized as the “black box” problem.Footnote 106 This problem refers to the complexity and secrecy of algorithm-based (especially deep learning-based) decision-making processes, which frustrate meaningful scrutiny and regulation. Without understanding and addressing the black box challenge, it may be unrealistic to rely on the disclosure of source codes as a regulatory approach. The black box problem can be further disentangled into a “legal black box” and a “technical black box.”Footnote 107 The “legal black box” is opaque because complex statistical models or source codes are proprietary, legally protected by trade secret laws.Footnote 108 Regulatory measures mandating disclosure are one way to fix this black box problem, unpacking the algorithms to secure a certain level of compliance.

However, the “technical black box,” which arises in applications based on machine/deep learning algorithms, is much more problematic.Footnote 109 A technically inherent lack of transparency persists: decisions and classifications emerge automatically, and no one, not even the programmers themselves, can adequately explain in human-intelligible terms why and how certain decisions and classifications are reached.Footnote 110 Owing to the highly nonlinear character of such systems, there exists “no well-defined method to easily interpret the relative strength of each input and to each output in the network.”Footnote 111 Measures limited to legally compelled disclosure can therefore hardly address the technical black box problem. Even if regulators force ADS manufacturers to disclose source codes and algorithm designs, the level of compliance may not be effectively ascertained and evaluated. Because of this technical black box problem, regulatory measures designed to compel disclosure of source codes and ensure compliance with ethical rules on ADSs (and hence the rational nexus between regulatory means and objectives) may be significantly frustrated.

IV Conclusion

ADSs promise to transform modern transportation, the conventional division of labor, social interactions, and the provision of services. When vehicles equipped with different levels of ADSs enter the market, however, a range of regulatory issues must be addressed. In particular, ethical puzzles pose formidable and multifaceted challenges for governments acting individually and collectively to deliver good ADS governance. As this chapter has analyzed, complex ethical questions, algorithmic designs, and cultural, demographic, and institutional factors may readily translate into heterogeneous regulatory measures that increase frictions in international trade and raise highly contentious issues in the WTO. This chapter used ADS ethics as a vantage point to identify and unpack three levels of challenges WTO members may face in addressing public moral issues by conditioning the import and sale of ADSs and dictating their design to reflect and respect local values. These challenges may well translate into a regulatory dilemma for WTO members. Premised upon a review of regulatory initiatives at the national and transnational levels, this chapter not only identified the normative boundaries set by the relevant WTO-covered agreements but also highlighted the inherent limitations of potential regulatory measures given the technological nature of AI,Footnote 112 which call for a reconceptualization of the forms and substance of regulations on this evolving technology.

Footnotes

* The author would like to thank Chia-Chi Chen, I-Ching Chen, Mao-wei Lo, and Si-Wei Lu for their research assistance. Any remaining errors are the author’s sole responsibility.

1 Various terms are used to refer to vehicles equipped with different levels of driving automation systems (a generic term that covers all levels of automation), such as self-driving cars, unmanned vehicles, and automated vehicles. However, as explained in Section II, the inconsistent and sometimes confusing use of terms may lead to regulatory misconceptions. This chapter uses “automated driving systems” to cover level 3–5 systems according to the most widely recognized classification by SAE International. See also Peng’s Chapter 6 in this volume.

2 “Autonomous Vehicle Market Outlook – 2026” (2018), https://perma.cc/9B5S-GYRE.

3 “IHS Clarifies Autonomous Vehicle Sales Forecast – Expects 21 Million Sales Globally in the Year 2035 and Nearly 76 Million Sold Globally Through 2035” (IHS Markit, 9 June 2016), https://perma.cc/77J7-VQ56.

4 More specifically, AI algorithms and sensing technologies help to draw a real-time, three-dimensional map of the environment (a 60-meter range around the vehicle), monitor surrounding activities, navigate and operate (e.g., speed, brake, steer, and change gear selection) the vehicle. See Autonomous Vehicle Market Outlook – 2026, note 2 above. See also HY Lim, Autonomous Vehicles and the Law: Technology, Algorithms and Ethics (Cheltenham, Edward Elgar Publishing, 2019), at 519.

5 See K Kokalitcheva, “Toyota Becomes Uber’s Latest Investor and Business Partner” (Fortune, 24 May 2016), https://perma.cc/254A-7HSX.

6 See K Korosec, “Autonomous Car Sales Will Hit 21 Million by 2035, IHS Says” (Fortune, 7 June 2016), https://perma.cc/4HEX-MHJT.

7 For example, the United States government announced in 2016 its $4 billion investment in automated vehicles. See B Vlasic, “U.S. Proposes Spending $4 Billion on Self-Driving Cars” (New York Times, 14 January 2016), https://perma.cc/36DJ-QKMQ.

8 See A Taeihagh and HSM Lim, “Governing Autonomous Vehicles: Emerging Responses for Safety, Liability, Privacy, Cybersecurity, and Industry Risks” (2018) 39(1) Transport Reviews 103, at 107–109; S Nyholm and J Smids, “The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?” (2016) 19(5) Ethical Theory & Moral Practice 1275, at 1275–1289.

9 See the discussion in Section II.

10 In addition, the respective regulatory governance strategies of these countries may change and adapt in light of ongoing economic growth, national security, and business competition issues. Their regulatory endeavors, as well as competition (or cooperation), may also lead to a more coherent global standard-setting process in international arenas. See generally H-W Liu, “International Standards in Flux: A Balkanized ICT Standard-Setting Paradigm and Its Implications for the WTO” (2014) 17(3) Journal of International Economic Law 551; M Du, “WTO Regulation of Transnational Private Authority in Global Governance” (2018) 67(4) International and Comparative Law Quarterly 867.

11 In some cases, the General Agreement on Trade in Services (GATS) may come into play, especially when most ADSs do not fall squarely into either “goods” or “services” in light of the increasing “servitization” of modern manufacturing. See E Lafuente et al., “Territorial Servitization and the Manufacturing Renaissance in Knowledge-Based Economies” (2019) 53(3) Regional Studies 313; T Baines et al., “Servitization of the Manufacturing Firm: Exploring the Operations Practices and Technologies That Deliver Advanced Services” (2014) 34(1) International Journal of Operations & Production Management 2; G Lay (ed.), Servitization in Industry (New York, Springer, 2014). The discussion on service under the GATS is beyond the scope of this chapter, the primary focus of which lies in product-oriented standards and rules.

12 See SAE International, J3016_201806: Surface Vehicle Recommended Practice: (R) Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (first issued in January 2014, and revised in June 2018 to supersede J3016, adopted in September 2016) (hereinafter SAE International J3016_201806). This definition and taxonomy are embraced by the United States Department of Transportation (US DoT) and the National Highway Traffic Safety Administration (NHTSA); see US DoT, “Preparing for the Future of Transportation: Automated Vehicles 3.0” (2018), https://perma.cc/E4WY-AMN3, at 45.

13 Ibid., at 28.

16 According to the US DoT and NHTSA’s estimation, around 90 percent of car accidents are the result of human error. See US DoT and NHTSA, “Traffic Safety Facts: A Brief Statistical Summary – Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey” (2015), https://perma.cc/JV6M-TC3M. The advent of ADSs may help reduce or even eliminate this human error factor, as these systems promise to outperform human drivers. See Taeihagh and Lim, note 8 above, at 107–109. See also Y Sun et al., “Road to Autonomous Vehicles in Australia: An Exploratory Literature Review” (2017) 26(1) Road and Transport Research: A Journal of Australian and New Zealand Research and Practice 34, at 34–47.

17 See, for example, A von Ungern-Sternberg, “Autonomous Driving: Regulatory Challenges Raised by Artificial Decision-Making and Tragic Choices,” in W Barfield and U Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (Cheltenham, Edward Elgar Publishing, 2018), at 253–254; and Taeihagh and Lim, note 8 above, at 107–109.

18 “After all, humans can be amazing drivers, the performance of advanced automation systems is still unclear … and automation shifts some errors from driver to designer.” BW Smith, “Human Error as a Cause of Vehicle Crashes” (Centre for Internet and Society, 18 December 2013), https://perma.cc/VN5B-SST4.

19 See generally Lim, note 4 above.

20 See, for example, DM West, “Moving Forward: Self-Driving Vehicles in China, Europe, Japan, Korea, and the United States” (2016), https://perma.cc/8SWG-GX2Y; V Dhar, “Equity, Safety, and Privacy in the Autonomous Vehicle Era” (2016) 49(11) Computer 80, at 80–83; JM Anderson et al., “Autonomous Vehicle Technology: A Guide for Policymakers” (2014), https://perma.cc/5FBA-UVRQ; FD Page and NM Krayem, “Are You Ready for Self-Driving Vehicles?” (2017) 29(4) Intellectual Property and Technology Law Journal 14.

21 See J Boeglin, “The Costs of Self-Driving Cars: Reconciling Freedom and Privacy with Tort Liability in Autonomous Vehicle Regulation” (2015) 17(1) Yale Journal of Law and Technology 171, at 176–185; M Gillespie, “Shifting Automotive Landscapes: Privacy and the Right to Travel in the Era of Autonomous Motor Vehicles” (2016) 50 Washington University Journal of Law and Policy 147, at 147–169. See also DJ Glancy, “Privacy in Autonomous Vehicles” (2012) 52(4) Santa Clara Law Review 1171; J Schoonmaker, “Proactive Privacy for a Driverless Age” (2016) 25(2) Information & Communications Technology Law 96; S Gambs et al., “De-anonymization Attack on Geolocated Data” (2014) 80(8) Journal of Computer and System Sciences 1597.

22 See SA Bhatti, “Automated Vehicles: Challenges to Full Scale Deployment” (Wavelength, 26 September 2019), https://perma.cc/5J8G-3B4V.

23 See JP Trachtman, “The Internet of Things Cybersecurity Challenge to Trade and Investment: Trust and Verify?” (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3374542.

24 See, for example, I Coca-Vila, “Self-Driving Cars in Dilemmatic Situations: An Approach Based on the Theory of Justification in Criminal Law” (2018) 12(1) Criminology Law & Philosophy 59; see also FS de Sio, “Killing by Autonomous Vehicles and the Legal Doctrine of Necessity” (2017) 20(2) Ethical Theory and Moral Practice 411.

25 See generally Philippa Foot, “The Problem of Abortion and the Doctrine of the Double Effect,” in Virtues and Vices (Oxford, Basil Blackwell, 1978) (originally appeared in Oxford Review 5, 1967).

26 See K Hao, “Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re from” (MIT Technology Review, 2018), https://perma.cc/K69S-V8H6.

27 E Awad et al., “The Moral Machine Experiment” (2018) 563 Nature 59.

29 Ibid., at 62–63.

31 See Coca-Vila, note 24 above, at 62–66.

32 One commentator also notes that the Trolley Problem and ethical principles might play a less decisive role than predictive legal liabilities that readily translate into monetary constraints on ADS manufacturers that are driven by profits. See B Casey, “Amoral Machines, or: How Roboticists Can Learn to Stop Worrying and Love the Law” (2017) 111 Northwestern University Law Review 231.

33 See J Kleinberg et al., “Discrimination in the Age of Algorithms” (2018) 10 Journal of Legal Analysis 1, at 4.

35 See A Hevelke and J Nida-Rümelin, “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis” (2015) 21(3) Science and Engineering Ethics 619, at 619–630; and JM Tien, “The Sputnik of Servgoods: Autonomous Vehicles” (2017) 26(2) Journal of Systems Science and Systems Engineering 133, at 133–162.

36 See generally H-W Liu and C-F Lin, “Artificial Intelligence and Global Trade Governance: Towards A Pluralist Agenda” (2020) 61 Harvard International Law Journal 407.

37 See, for example, Liu, note 10 above.

38 See generally Du, note 10 above.

39 See US DoT, note 12 above, at 57–63.

40 Ibid., at 60.

41 See British Standard Institution, PAS 1885:2018: The Fundamental Principles of Automotive Cyber Security (December 2018); see also United Kingdom Department for Transport, Centre for Connected and Autonomous Vehicles, and Centre for the Protection of National Infrastructure, “The Key Principles of Cyber Security for Connected and Automated Vehicles” (2017), www.gov.uk/government/publications/principles-of-cyber-security-for-connected-and-automated-vehicles/the-key-principles-of-vehicle-cyber-security-for-connected-and-automated-vehicles.

42 Unmanned Vehicles Technology Innovative Experimentation Act (Taiwan) (UV Act). The UV Act was promulgated on 19 December 2018.

43 UV Act, Art. 3.

44 See Taeihagh and Lim, note 8 above, at 10.

45 See “Federal Ministry of Transport and Digital Infrastructure, Ethics Commission: Automated and Connected Driving” (2017), https://perma.cc/YQ8S-KTE9 (hereinafter 2017 Germany Ethical Commission Report); see also C Lütge, “The German Ethics Code for Automated and Connected Driving” (2017) 30(4) Philosophy and Technology 547.

46 2017 Germany Ethical Commission Report, note 45 above.

47 2017 Germany Ethical Commission Report, at 6–9 (“Ethical Rules for Automated and Connected Vehicular Traffic”), Rule 2.

48 Ibid., Rule 4.

49 Ibid., Rule 7.

50 Ibid., Rule 8.

51 Ibid., Rule 9.

52 See Taeihagh and Lim, note 8 above, at 10.

53 See Lütge, note 45 above, at 557.

54 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation, GDPR), Arts. 21 and 22.

55 European Commission, “Building Trust in Human-Centric AI, Ethics Guidelines for Trustworthy AI,” https://perma.cc/M2WL-NL24.

56 Algorithmic Accountability Act of 2019, OLL19293, 116th Congress (2019).

57 For a review of such transnational regulatory initiatives and their normative ramifications, see Liu and Lin, note 36 above, at 440–450.

58 United Nations Economic Council for Europe (hereinafter UNECE), Economic and Social Council, Inland Transportation Committee, Working Party on Road Traffic Safety, U.N. Doc. ECE/TRANS/WP.1/145 (24–26 March 2014); UNECE, “UNECE Paves the Way for Automated Driving by Updating UN International Convention” (23 March 2016), https://perma.cc/7PNX-2GA4.

59 1968 Vienna Convention on Road Traffic (78 Parties) and the March 2014 Amendment, https://perma.cc/5C8K-Y3ST.

60 See Liu and Lin, note 36 above, at 410–411.

62 UNECE, “Report of the Sixty-Eighth Session of the Working Party on Road Traffic Safety” (2014), https://perma.cc/JZ3Q-PM62.

63 UNECE, “Report of the Global Forum for Road Traffic Safety on Its Sixty-Seventh Session” (2014), https://perma.cc/RC99-WAXQ (Annex 1, Global Forum for Road Traffic Safety (WP.1) Resolution on the Deployment of Highly and Fully Automated Vehicles in Road Traffic).

64 See Liu and Lin, note 36 above, at 427–428.

65 SAE International J3016_201806, note 12 above.

66 International Organization for Standardization, “ISO 26262 Road Vehicles Functional Safety,” https://perma.cc/L4DL-4V97; ISO, “Intelligent Transport Systems–Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, ISO/SAE NP PAS 22736” (hereinafter ISO/SAE NP PAS 22736), https://perma.cc/BW2M-SVQK.

67 IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (IEEE Global Initiative) has launched “Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems,” https://perma.cc/BQH5-HGHN.

68 ISO/SAE NP PAS 22736, note 66 above.

69 See J Pokrzywa, “SAE Global Ground Vehicle Standards” (2019), https://perma.cc/9BV6-LBVQ.

70 See J Shuttleworth, “SAE Standards News: J3016 Automated-Driving Graphic Update” (2019), https://perma.cc/6STW-BXJF. See also Liu and Lin, note 36 above, at 427.

71 At this moment, it appears challenging to reach multilateral consensus on controversial issues of ADS ethics. As some regulatory initiatives will likely be designed to pursue diverse policy objectives reflecting local values and moral preferences, there may be growing competition among countries.

72 Awad et al., note 27 above, at 62–63.

73 For an in-depth discussion of China’s social credit system and its impact on social and economic activities, see generally Y-J Chen et al., “‘Rule of Trust’: The Power and Perils of China’s Social Credit Megaproject” (2018) 32(1) Columbia Journal of Asian Law 1.

74 Appellate Body Report, European Communities – Measures Affecting Asbestos and Asbestos-Containing Products, WT/DS135/AB/R (5 April 2001) [EC–Asbestos], para. 99.

75 Ibid. See also Appellate Body Report, United States – Measures Affecting the Production and Sale of Clove Cigarettes, WT/DS406/AB/R (24 April 2012) [US–Clove Cigarettes], para. 120.

76 Ibid. Arguably, this market-oriented approach systematically excludes the bases for regulatory distinctions. See JP Trachtman, “WTO Trade and Environment Jurisprudence: Avoiding Environmental Catastrophe” (2017) 58(2) Harvard International Law Journal 273, at 277–281.

77 See Trachtman, note 23 above, at 20.

79 GATT, art. XX(a) and chapeau. See Appellate Body Report, United States – Standards for Reformulated and Conventional Gasoline, WT/DS2/AB/R (20 May 1996) [US–Gasoline], at 22; see also Appellate Body Report, United States – Import Prohibition of Certain Shrimp and Shrimp Products, WT/DS58/AB/R (6 November 1998) [US–Shrimp], paras. 119–120; Appellate Body Report, Brazil – Measures Affecting Imports of Retreaded Tyres, WT/DS332/AB/R (17 December 2007) [Brazil–Retreaded Tyres], para. 139.

80 As noted, however, the discussion on service under the GATS is beyond the scope of this chapter.

81 Appellate Body Report, Colombia – Measures Relating to the Importation of Textiles, Apparel and Footwear, WT/DS461/AB/R (22 June 2016) [Colombia–Textiles], paras. 5.67–5.70.

82 Appellate Body Report, China – Measures Affecting Trading Rights and Distribution Services for Certain Publications and Audiovisual Entertainment Products, WT/DS363/AB/R (19 January 2010) [China–Publications and Audiovisual Products], paras. 239 and 242.

83 Footnote Ibid., paras. 300–311, 326–327.

84 Panel Report, China – Publications and Audiovisual Products, WT/DS363/R (19 January 2010), paras. 7.759 and 7.763; see also Panel Report, United States – Measures Affecting the Cross-Border Supply of Gambling and Betting Services, WT/DS285/R (7 April 2005) [US–Gambling], paras. 6.461 and 6.465.

85 See Appellate Body Report, European Communities – Measures Prohibiting the Importation and Marketing of Seal Products, WT/DS401/AB/R (18 June 2014) [EC–Seal Products], paras. 5.200–5.201. Indeed, WTO members and their societies “are not homogenous, either in their domestic political structures or in their ethical, moral, or religious beliefs.” R Howse et al., “Pluralism in Practice: Moral Legislation and the Law of the WTO After Seal Products” (2015) 48 George Washington International Law Review 81, at 85.

86 Trachtman, note 23 above, at 21 (citing Appellate Body Report, United States – Measures Affecting the Production and Sale of Clove Cigarettes, WT/DS406/AB/R (24 April 2012), paras. 96–102).

87 TBT Agreement, Arts. 2.1 and 2.2. See Appellate Body Report, United States – Measures Concerning the Importation, Marketing and Sale of Tuna and Tuna Products, Recourse to Article 21.5 of the DSU by Mexico, WT/DS381/AB/RW (3 December 2015), para. 284.

88 Appellate Body Report, United States – Measures Concerning the Importation, Marketing and Sale of Tuna and Tuna Products, WT/DS381/AB/R (13 June 2012) [US–Tuna], at 320, 322.

89 TBT Agreement, Art. 2.4.

90 See Trachtman, note 23 above, at 22.

91 See Liu and Lin, note 36 above, at 411, 429–430, 446–447.

92 See generally SK Katyal, “The Paradox of Source Code Secrecy” (2019) 104 Cornell Law Review 101.

93 Ibid., at 145–146.

94 TRIPS Agreement, Art. 39.1.

95 TRIPS Agreement, Art. 39.2.

96 TRIPS Agreement, Art. 1.2. See World Intellectual Property Organization (WIPO), Introduction to Intellectual Property: Theory and Practice (2nd ed., Alphen aan den Rijn, Wolters Kluwer, 2017), at 243–246. See NP de Carvalho, The TRIPS Regime of Antitrust and Undisclosed Information (Alphen aan den Rijn, Kluwer Law International, 2008), at 189–190.

97 See J Malbon et al., The WTO Agreement on Trade-Related Aspects of Intellectual Property Rights: A Commentary (Cheltenham, Edward Elgar Publishing, 2014), at 577.

98 Negotiating Group on Trade-Related Aspects of Intellectual Property Rights, including Trade in Counterfeit Goods (1990), Status of Work in the Negotiating Group: Chairman’s Report to the GNG, MTN.GNG/NG11/W/76, Part III, s. 7. 1 A.2.

99 Malbon et al., note 97 above, at 579.

100 TRIPS Agreement, Art. 8.1.

101 TRIPS Agreement, Art. 7.

102 TRIPS Agreement, Art. 8.2.

103 See F Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, MA, Harvard University Press, 2015), at 160–161; see also F Pasquale, “Beyond Innovation and Competition: The Need for Qualified Transparency in Internet Intermediaries” (2010) 104 Northwestern University Law Review 105.

104 For instance, China has been accused of forcing foreign companies to disclose sensitive technical data and proprietary source code via a series of administrative processes as a necessary step for market entry, and such data and source code could be passed to domestic competitors. See L Wei and B Davis, “How China Systematically Pries Technology from U.S. Companies” (Wall Street Journal, 26 September 2018), https://perma.cc/ZCV4-DHTK; JY Qin, “Forced Technology Transfer and the US-China Trade War: Implications for International Economic Law,” Wayne State University Law School Research Paper No. 2019-61 (5 October 2019), 3–4.

105 See, for example, Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), Art. 14.17.

106 See generally Pasquale, note 103 above; and F Pasquale, “Secret Algorithms Threaten the Rule of Law” (MIT Technology Review, 1 July 2017), https://perma.cc/6UYB-86VD.

107 See generally H-W Liu et al., “Beyond State v. Loomis: Artificial Intelligence, Government Algorithmization, and Accountability” (2019) 27(2) International Journal of Law and Information Technology 122.

110 See ibid. See JV Tu, “Advantages and Disadvantages of Using Artificial Neural Networks versus Logistic Regression for Predicting Medical Outcomes” (1996) 49(11) Journal of Clinical Epidemiology 1225; M Aikenhead, “The Uses and Misuses of Neural Networks in Law” (1996) 12(1) Santa Clara Computer and High Technology Law Journal 31, at 33; and P Margulies, “Surveillance by Algorithms: The NSA, Computerized Intelligence Collection, and Human Rights” (2016) 68 Florida Law Review 1045, at 1069.

111 See L Zhou et al., “A Comparison of Classification Methods for Predicting Deception in Computer-Mediated Communication” (2004) 20(4) Journal of Management Information Systems 139, at 150–151.

112 See generally MU Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies” (2016) 29(2) Harvard Journal of Law & Technology 353.
