Since the issuance of a joint statement in January 2019, seventy-eight World Trade Organization (WTO) members have confirmed their intention to commence WTO negotiations on trade-related aspects of electronic commerce. There is a growing expectation that a new agreement on trade-related aspects of electronic commerce (TREC Agreement) will be adopted in the not-so-distant future. One key question that has been left out of the process of negotiating the TREC Agreement is how disputes concerning electronic commerce should be settled. This chapter points out that digital trade disputes arising under the proposed TREC Agreement will likely differ from conventional trade disputes arising under the WTO agreements in the diversity of their stakeholders and in the balance they strike between trade and non-trade values, and that the rules and procedures of the WTO Dispute Settlement Understanding (DSU) may not properly apply to the former. It argues that special or additional dispute settlement rules and procedures should be incorporated into the TREC Agreement to fill those gaps in the existing DSU with regard to the handling of digital trade disputes.
This chapter surveys a number of regulatory interventions through which governments seek to enhance domestic companies’ access to data: mandatory data sharing requirements (as under the EU’s new financial services regulations), data transfer restrictions (as under India’s draft e-commerce policy), and open data initiatives (as under Singapore’s ‘smart nation’ initiative) – all seek to make more data available with the aim of spurring innovation and growth in the AI economy. Such measures are indirectly affected by existing and newly emerging rules of international economic law. International investment law is likely to be mobilized in defense against governments that seek to mandate data sharing from private data holders, while new rules on “digital trade” are meant to ensure transnational data mobility. In sum, international economic law regulates data in favor of data-holders’ ability to retain control over data location and use, and constrains states’ ability to confront asymmetric control over data.
Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid them. Furthermore, it condemns reliance on various indexes of distributive injustice, or unchosen properties, as evidence of law-breaking.
Customs surveillance of intellectual property is an efficient way to quickly and effectively provide legal protection to the right-holder, as it allows infringements to be nipped in the bud. Technology has drastically changed the means and mechanisms of customs enforcement, as it increases the possibilities of identifying and detaining goods infringing IPRs, and makes it more feasible to assess in advance where control is required.
However, assessing in advance and acting when appropriate does not always match well with fundamental intellectual property principles (territoriality), global trade norms (freedom of transit), global intellectual property rules, and due process requirements.
This chapter explores some of the challenges and opportunities brought by AI, big data and distributed ledger technologies to customs enforcement of IPRs. It looks at AI’s transformative influence on IP enforcement and the digitization and use of big data in customs control.
Human rights do remain valid currency in how we approach planetary-scale computation and the accompanying data flows. Today’s system of human rights protection, however, is highly dependent on domestic legal institutions, which are unraveling faster than fitting transnational governance institutions can be reconstructed.
The chapter takes a critical look at the construction of the data flow metaphor as a policy concept inside international trade law. Subsequently, it explores how respect for human rights ties in with national constitutionalism, which is increasingly challenged by the transnational dynamic of digital-era transactions.
Lastly, the chapter turns to international trade law and why its ambitions to govern cross-border data flows will likely not advance efforts to generate respect for human rights. In conclusion, the chapter advocates for a rebalancing act that recognizes human rights inside international trade law.
Artificial intelligence is an emerging topic in intellectual property protection. The chapter starts with working definitions of big data and AI and explores some of the work done on AI and big data in the WTO, in particular under the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). The chapter then asks what needs to be done to adapt IP law to meet the challenge of big data and AI by examining distinct provisions of TRIPS.
One of the biggest, newest and most exciting assessment and research opportunities to occur since the millennium has been the exploitation of Big Data, the ‘electronic footprint’ that we all leave when using credit and other cards as well as the web, through a variety of social networks. Assessment, selection and recruitment experts have not been slow to seize on Big Data as a way of collecting a wide variety of pieces of information about targeted individuals. There have also been some high-profile scandals involving Big Data. This chapter looks at the five Vs of Big Data: Volume (how much data on individuals is potentially available), Variety (the wide range of data on behaviours available), Velocity (the sheer speed of data accumulation and the possibilities of analysis), Veracity (the all-important question of the accuracy and truthfulness of the data) and Value (whether it is uniquely valuable or not). Studies on Facebook profiles are discussed in detail. Big Data is perhaps the most exciting prospect for person assessment, but its promises, perils and problems are also discussed. Finally, half a dozen experts report on how they see Big Data as offering opportunities for person assessment.
This chapter looks at the future of people assessment. Like many other areas of business, it has seen many rapid, technology-led changes. There are questions about who is or should be assessed; when and how they are assessed; the cost of and legal changes in assessment; and how data are stored. The quiet world of academic-led assessment and testing has been ‘invaded’ by people in business eager to sell psychological testing and assessment to a much larger market. Inevitably there are enthusiasts and sceptics: the former claim that AI, computing and neuroscience technology will revolutionise the ease, cost and accuracy of assessment, while the sceptics argue there is still very little evidence for these claims. It certainly is a ‘good time to be alive’ for those interested in people assessment.
Over the years, there has been more and more research to test the validity of personnel assessment methods, an area which is far from easy. This book compares traditional practices against new techniques, including social media analytics, wearables, mobile phone logs, and gamification. Researchers and businesses alike know the importance of making good, and avoiding bad, selection decisions, but are unsure of how to proceed effectively. This book maps out the viable options and advises on best practice. The author combines both practical applications and academic, psychological research to explain how each method works, the theory behind it, and the extent of the evidence that supports it.
Within machine ethics, there is growing interest in the question of whether, and under what circumstances, an artificial intelligence would deserve moral consideration. This paper explores a particular type of moral status that the author terms psychological moral patiency, focusing on the epistemological question of what sort of evidence might lead us reasonably to conclude that a given artificial system qualifies as having this status. The paper surveys five possible criteria that might be applied: intuitive judgments, assessments of intelligence, the presence of desires and autonomous behavior, evidence of sentience, and behavioral equivalence. The author suggests that, despite its limitations, the last approach offers the best way forward, and defends a variant of it termed the cognitive equivalence strategy. In short, this holds that an artificial system should be considered a psychological moral patient to the extent that it possesses cognitive mechanisms shared with other beings, such as nonhuman animals, whom we also consider to be psychological moral patients.
Embodiment is typically given insufficient weight in debates concerning the moral status of Novel Synthetic Beings (NSBs) such as sentient or sapient Artificial Intelligences (AIs). Discussion usually turns on whether AIs are conscious or self-aware, but this does not exhaust what is morally relevant. Since moral agency encompasses what a being wants to do, the means by which it enacts choices in the world is a feature of such agency. In determining the moral status of NSBs and our obligations to them, therefore, we must consider how their corporeality shapes their options, preferences, values, and is constitutive of their moral universe. Analysing AI embodiment and the coupling between cognition and world, the paper shows why determination of moral status is only sensible in terms of the whole being, rather than mental sophistication alone, and why failure to do this leads to an impoverished account of our obligations to such NSBs.
We are moving at a fast pace towards the era of machines that are in charge of moral decisions, as in the case of self-driving cars. By reviewing the 2018 accident involving an Uber self-driving car in Arizona, this chapter discusses the complexities of assigning responsibility when such an accident occurs as a result of a joint decision between human and machine, raising the question: can we ascribe any form of responsibility to the car, or does the responsibility lie solely with the car designer or manufacturer? There is a tendency among scientists and engineers to emphasize the imperfection of human beings and argue that computers could be the "moral saints" we humans can never be because they are not prone to human emotions with their explicit and implicit biases. By reviewing examples from loan approval practices, the chapter shows why this is incorrect. The chapter reviews the ethics of artificial intelligence (AI), specifically focusing on the problems of agency and bias. It further discusses meaningful human control in autonomous technologies as a powerful way of looking at human–machine interactions from the perspective of active responsibilities.
Algorithmic decision tools (ADTs) are being introduced into public sector organizations to support more accurate and consistent decision-making. Whether they succeed turns, in large part, on how administrators use these tools. This is one of the first empirical studies to explore how ADTs are being used by Street Level Bureaucrats (SLBs). The author develops an original conceptual framework and uses in-depth interviews to explore whether SLBs are ignoring ADTs (algorithm aversion); deferring to ADTs (automation bias); or using ADTs together with their own judgment (an approach the author calls “artificing”). Interviews reveal that artificing is the most common use-type, followed by aversion, while deference is rare. Five conditions appear to influence how practitioners use ADTs: (a) understanding of the tool, (b) perception of human judgment, (c) seeing value in the tool, (d) being offered opportunities to modify the tool, and (e) alignment of the tool with expectations.
The problem we address in this chapter is easy enough to state: Relatively simple algorithms, when duplicated many-fold and arrayed in parallel, produce systems capable of generating highly creative and nuanced solutions to real-world challenges. The catch is that the autonomy and architecture that make these systems so powerful also make them difficult to control or even understand.
Automated systems that process vast amounts of data about individuals and communities have become a transformative force within contemporary societies and institutions. Governments and businesses, which adopt and develop new techniques of collecting and analyzing information, rely on algorithms for decision-making in sectors such as banking, political marketing, health, and criminal justice. Among the early adopters of automated systems are welfare agencies responsible for the distribution of welfare benefits and the management of social policies. These new ways of using technology promise efficiency, standardization, and resource optimization. However, the debate about artificial intelligence (AI) and algorithms should not be limited to questions about their technical capabilities and functionalities. The creation and implementation of technological innovations also pose a significant normative and ethical challenge for our society. The decision to process data and use certain algorithms is structured and motivated by specific political and economic factors. Therefore, as Winner argued, technical artifacts possess political qualities and are far from neutral.
To many people, there is a boundary between artificial intelligence (AI), sometimes referred to as an intelligent software agent, and the system which is controlled through AI, primarily by the use of algorithms. One example of this dichotomy is robots, which have a physical form but whose behavior is highly dependent on the “AI algorithms” that direct their actions. More specifically, we can think of a software agent as an entity directed by algorithms that perform many intellectual activities currently done by humans. The software agent can exist in a virtual world (for example, a bot) or can be embedded in the software controlling a machine (for example, a robot). Many current robots controlled by algorithms represent semi-intelligent hardware that repetitively performs tasks in physical environments. This observation is based on the fact that most robotic applications for industrial use since the middle of the last century have been driven by algorithms that support repetitive machine motions. In many cases, industrial robots, which typically work in closed environments such as factory floors, do not need “advanced” techniques of AI to function because they perform daily routines with algorithms directing the repetitive motions of their end effectors. Lately, however, an emerging technological trend has resulted from the combination of AI and robots: by using sophisticated algorithms, robots can adopt complex work styles and function socially in open environments. We may call these merged technological products “embodied AI” or, in a more general sense, “embodied algorithms.”
In writing this book, my overall objective was to provide an overview of the regulation of derivatives while exploring critical legal and regulatory issues associated with the ever-increasing use of algo bots and related AI systems in these markets. While discussion and analysis of HFT firms, virtual currencies, and algorithmic market manipulation might be more fashionable topics at the moment, basic information about the existing laws and regulations governing the markets for derivatives is necessary context for understanding the impact that technological changes are having on the markets for futures and other derivatives. That is why I included chapters describing the overall regulatory framework for derivatives.
This is an overview article on artificial intelligence (AI) and its potential normative implications. Technology has always had inherent normative consequences, and this is especially true of AI and the use of algorithms. There is a crucial difference between algorithms in a technical sense and algorithms from a social-science perspective. It is a question of different orders of normativity: the first relates to the algorithm as a technical instruction, the second to the consequences springing from the first order. I call these last-mentioned norms algo norms. These are embedded in the technology and determined by the design of the AI. The outcome is an empirical question. AI and algo norms are moving targets, which call for a novel scientific approach that relates to advanced practice. Law comes into play primarily for preventive reasons, in relation to negative aspects of the new technology. No major regulatory scheme for AI exists. In the article, I point out some areas that raise the need for legal regulation. Finally, I comment on three main challenges for digital development in relation to AI: (1) the energy costs; (2) the singularity point; and (3) the governance problems.
This chapter evaluates the applicability of the natural monopoly framework in digital platform markets. It starts by discussing the role of technological change in fostering concentrated market structures in digital industries and by evaluating the policy implications that should be derived from the economic literature on multisided platforms. On that basis, it then identifies the general conditions that may give rise to a natural monopoly platform.