This chapter analyses the evolving technological and legal intersection between content and data in the algorithmic society. The shift of these two fields from parallel tracks to overlapping layers helps to examine platform powers and to understand the role of European digital constitutionalism. The first part examines the points of convergence and divergence between the legal regimes introduced by the e-Commerce Directive and the Data Protection Directive. The second part uses two examples to show how the relationship between the two systems has evolved, looking in particular at how technological convergence has led to overlapping layers between the fields of content and data, which were conceived on parallel tracks. The third part examines the role of European digital constitutionalism with a specific focus on three paths of legal convergence.
This chapter underlines how, in the field of data, European digital constitutionalism would suggest not introducing new safeguards but rather providing a teleological interpretation of the GDPR that unveils its constitutional dimension. The first part of this chapter focuses on the rise and consolidation of data protection in the European framework. The second part addresses the rise of the big data environment and the constitutional challenges introduced by automated decision-making technologies. The third part focuses on the GDPR, underlining the opportunities and challenges of European data protection law concerning artificial intelligence. This part aims to highlight to what extent the system of the GDPR can ensure the protection of the rights to privacy and data protection in relation to artificial intelligence technologies. The fourth part underlines the constitutional values underpinning the GDPR to provide a constitutional interpretation of how European data protection law, as one of the most mature expressions of European digital constitutionalism, can mitigate the rise of unaccountable powers in the algorithmic society.
This chapter highlights the reasons why online platforms' freedoms have turned into more extensive forms of private power. Understanding the characteristics of platform power is critical to understanding the remedies that can mitigate this constitutional challenge. This chapter analyses the two interrelated forms through which platforms exercise power in the digital environment: delegated and autonomous powers. The first part of the chapter analyses the reasons for a governance shift from public to private actors in the digital environment. The second part examines delegated powers in the fields of content and data, while the third part focuses on the exercise of autonomous powers competing with public authority.
Algorithmic transparency is the basis of machine accountability and the cornerstone of policy frameworks that regulate the use of artificial intelligence techniques. The goal of algorithmic transparency is to ensure accuracy and fairness in decisions concerning individuals. AI techniques replicate bias, and as these techniques become more complex, bias becomes more difficult to detect. But the principle of algorithmic transparency remains important across a wide range of sectors. Credit determinations, employment assessments, educational tracking, as well as decisions about government benefits, border crossings, communications surveillance and even inspections in sports stadiums increasingly rely on black box techniques that produce results that are unaccountable, opaque, and often unfair. Even the organizations that rely on these methods often do not fully understand their impact or their weaknesses.
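To make the stakes concrete, the following is a minimal, hypothetical sketch of one basic audit that transparency makes possible: comparing approval rates across groups (a demographic parity check). The decisions, group labels, and credit-scoring framing are illustrative assumptions, not drawn from the chapter.

```python
# Hypothetical sketch: auditing a black-box decision system for disparate
# outcomes across groups (demographic parity). All data below are
# illustrative assumptions, not real results.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return (gap, per-group approval rates) for binary decisions.

    decisions: iterable of 0/1 outcomes produced by the opaque system
    groups:    iterable of group labels, aligned with decisions
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        approvals[g] += d
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical credit decisions for two groups of applicants.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
print(rates)               # {'A': 0.75, 'B': 0.25}
print(f"gap = {gap:.2f}")  # 0.50: a large gap flags the system for scrutiny
```

Even this crude check presupposes access to outcomes and group membership, which is precisely what black box deployments often deny.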
Human behaviour is increasingly governed by automated decisional systems based on machine learning (ML) and ‘Big Data’. While these systems promise a range of benefits, they also throw up a congeries of challenges, not least for our ability as humans to understand their logic and ramifications. This chapter maps the basic mechanics of such systems, the concerns they raise, and the degree to which these concerns may be remedied by data protection law, particularly those provisions of the EU General Data Protection Regulation that specifically target automated decision-making. Drawing upon the work of Ulrich Beck, the chapter employs the notion of ‘cognitive sovereignty’ to provide an overarching conceptual framing of the subject matter. Cognitive sovereignty essentially denotes our moral and legal interest in being able to comprehend our environs and ourselves. Focus on this interest, the chapter argues, fills a blind spot in scholarship and policy discourse on ML-enhanced decisional systems, and is vital for grounding claims for greater explicability of machine processes.
This book is about rights and powers in the digital age. It is an attempt to reframe the role of constitutional democracies in the algorithmic society. By focusing on the European constitutional framework as a lodestar, this book examines the rise and consolidation of digital constitutionalism as a reaction to digital capitalism. The primary goal is to examine how European digital constitutionalism can protect fundamental rights and democratic values against the charm of digital liberalism and the challenges raised by platform powers. Firstly, this book investigates the reasons leading to the development of digital constitutionalism in Europe. Secondly, it provides a normative framework analysing to what extent European constitutionalism provides an architecture to protect rights and limit the exercise of unaccountable powers in the algorithmic society. This title is also available as open access on Cambridge Core.
The Agreement Establishing the African Continental Free Trade Area (AEAfCFTA) is a revolutionary treaty of the African Union (AU) which creates an African single market to guarantee the free movement of persons, capital, goods and services. The AEAfCFTA is geared towards enabling seamless trade among African countries. The single market relies heavily on the processing of the personal data of persons resident within and outside the AU, thereby necessitating an effective data protection regime. However, the data protection regime across Africa is fragmented, with each country either having a distinct data protection framework or none at all. This lack of a uniform continental framework threatens to clog the wheels of the African Continental Free Trade Area (AfCFTA), because demanding compliance with the various data protection laws across Africa will inhibit free trade, the very problem the AEAfCFTA seeks to remedy. These concerns are considered and applicable solutions are proposed to ensure the successful implementation of the AfCFTA.
An increasing number of EU citizens use self-monitoring mHealth apps. The extensive processing of health data by these apps poses severe risks to users’ personal autonomy. These risks are further compounded by the lack of specific EU regulation of mHealth and the inapplicability of the EU legal framework on health and patients’ rights, including the Medical Devices Regulation. While the General Data Protection Regulation provides a solid legal framework for the protection of health data, in practice, many mHealth apps do not comply. This chapter examines the feasibility of self-regulation by app stores as a complementary form of regulation in order to improve the level of protection of EU mHealth app users. App stores already play an important role through top-down regulation of the third-party mHealth apps distributed on their platforms by means of app review procedures. In order to assess the effectiveness of these existing practices, a case study analysis is performed on the regulatory practices of Apple’s App Store and Google Play. This analysis is used to provide recommendations on how to strengthen current self-regulation initiatives by app stores in the context of health data protection.
As humanitarian organizations become more active in the digital domain and reliant upon new technologies, they evolve from simple bystanders to full-fledged stakeholders in cyberspace, able to build on the advantages of new technologies but also vulnerable to adverse cyber operations that can impact their capacity to protect and assist people affected by armed conflict or other situations of violence. The recent hack of the International Red Cross and Red Crescent Movement's Restoring Family Links network tools, potentially exposing the personal data of half a million vulnerable individuals to unauthorized access by unknown hackers, is a stark reminder that this is not just a theoretical risk but a very real one.
The 2020 cyber operation affecting SolarWinds, a major US information technology company, demonstrated the chaos that a hack can cause by targeting digital supply chain components. What does the hack mean for the humanitarian cyberspace, and what can we learn from it? In this article, Massimo Marelli, Head of the International Committee of the Red Cross's Data Protection Office, draws out some possible lessons and considers the way forward by drawing on the notion of “digital sovereignty”.
This article examines the extent to which international law protects international organizations (IOs) from hacking operations committed by States. First, it analyzes whether hacking operations undertaken by member States and host States breach the privileges and immunities granted to IOs by their constitutive treaties, headquarters agreements, and conventions on privileges and immunities concerning the inviolability of their premises, property, assets, archives, documents and correspondence. The article also explores the question of whether hacking operations carried out by non-member States breach these provisions on the basis that they have passed into customary international law or because they attach to the international legal personality of IOs. Second, the article considers the question of whether hacking operations breach the principle of good faith. In this regard, it discusses the applicability of the principle of good faith to the relations between IOs, member States, host States and non-member States, and then considers how hacking operations impinge on a number of postulates emanating from good faith such as the pacta sunt servanda rule, the duty to respect the legal personality of IOs, the duties of loyalty, due regard and cooperation, and the duty not to abuse rights. Finally, the article examines the question of whether the principle of State sovereignty offers IOs indirect protection insofar as hacking can breach the sovereignty of the host State or the sovereignty of the State on whose cyber infrastructure the targeted data is resident.
It is well known that the financial technology (Fintech) industry has great potential not only to transform the financial system, but also to build an equitable and sustainable society. In effect, if this technology is applied in the right way, it could be used to overcome the social and economic gaps that exist worldwide.
Justification:
However, the specific legal regimes (RegTech) established for Fintech so far, together with a general lack of confidence in new technologies, have made its implementation more difficult. Nevertheless, in order to consolidate Fintech, it is necessary to design suitable regulation to transform these new technologies into ordinary instruments of our financial system.
Objective:
Therefore, in order to promote an appropriate RegTech that allows Fintech to progress, it is necessary to analyse the legal problems that restrict its expansion, using an analytical methodology and a bibliographic compilation of legal resolutions.
Main conclusion:
Legal personal data protection is the main obstacle that must be overcome, paying attention to the guarantees inherent in this fundamental right. If the legal system is to be ready for the Digital Revolution, society must not have to worry about either the loss of rights or increases in inequality.
In our data-driven society, personal data affecting individuals as data subjects are increasingly being collected and processed by sizeable and international companies. While data protection laws and privacy technologies attempt to limit the impact of data breaches and privacy scandals, they rely on individuals having a detailed understanding of the available recourse, resulting in the responsibilization of data protection. Existing data stewardship frameworks incorporate data-protection-by-design principles but may not include data subjects in the data protection process itself, relying on supplementary legal doctrines to better enforce data protection regulations. To better protect individual autonomy over personal data, this paper proposes a data protection-focused data commons to encourage co-creation of data protection solutions and rebalance power between data subjects and data controllers. We conduct interviews with commons experts to identify the institutional barriers to creating a commons and challenges of incorporating data protection principles into a commons, encouraging participatory innovation in data governance. We find that working with stakeholders of different backgrounds can support a commons’ implementation by openly recognizing data protection limitations in laws, technologies, and policies when applied independently. We propose requirements for deploying a data protection-focused data commons by applying our findings and data protection principles such as purpose limitation and exercising data subject rights to the Institutional Analysis and Development (IAD) framework. Finally, we map the IAD framework into a commons checklist for policy-makers to accommodate co-creation and participation for all stakeholders, balancing the data protection of data subjects with opportunities for seeking value from personal data.
In May 2012, a former research assistant contacted the Montréal police about an interview he had conducted with Luka Magnotta for the SSHRC-funded research project Sex Work and Intimacy: Escorts and their Clients four years previously. That call ultimately resulted in the Parent and Bruckert v R and Magnotta case. Now, a decade later, we are positioned to reflect on the collective lessons learned (and lost) from the case. In this paper, we map the Canadian confidentiality landscape before teasing out ten lessons from Parent c R. To do so, we draw on personal archives, survey results from sixty researchers, twelve key informant interviews with qualitative sociolegal and criminology researchers, and documentary analysis of university research policies. The lessons, which range from the clichéd, to the practical, to the frustrating, have implications for the individual work of Canadian researchers and for the collective work of academic institutions.
Data localization hurts foreign investment and brings potential economic advantages to domestic corporations relative to foreign corporations. This leads to the argument that data localization violates the national treatment principle in international investment treaties. By applying the ‘three-step’ approach to assess the legality of data localization with respect to the national treatment principle, this article finds that the legality of data localization depends on certain circumstances, including the domestic catalogues of foreign investment, the definition of data localization in domestic legislation, and whether international investment treaties explicitly or implicitly incorporate data protection through exceptions for the protection of the state's essential security interests, public order, or public morals. China's acceleration of its legislation processes to regulate cross-border data transfer has significant implications for the negotiations and modifications of Chinese international investment treaties.
This chapter introduces the subject matter of the book, provides the core problem statement and defines the central terms used in the book. The introduction also explains the focus on governmental adoption of cloud computing services, legal sources, and the research approach.
The introduction explains how cloud computing has made it possible and desirable for users, such as businesses and governments, to migrate their data to be hosted on infrastructure managed by third parties. The chapter further outlines why aspects of migration to cloud services pose specific legal, contractual, and technical challenges for governments.
The chapter further outlines the challenge of addressing contracting and procurement requirements, data privacy and jurisdictional obligations when using an opaque, global, multi-tenant technology such as cloud computing.
This chapter evaluates the key data protection requirements and compliance obligations that governments must account for when entering into contracts with cloud service providers. The chapter concentrates on data protection issues that pose particular barriers for governments attempting to adopt cloud-computing services.
The chapter focuses primarily on understanding how the General Data Protection Regulation (GDPR) impacts the use of cloud computing. This requires an analysis of applicability and jurisdiction, the application of data protection principles, roles and responsibilities under the law, contractual obligations on sub-processors, liability for compliance, and limits on data transfers, among other issues. The chapter also provides an overview of US data privacy law.
The chapter further evaluates recent case law and guidance from the European Data Protection Board (EDPB) and national data protection authorities to draw conclusions regarding GDPR cloud compliance obligations. Specifically, the chapter focuses on challenges and limits to cross-border transfers of data following the CJEU decision in the “Schrems II” case.
This chapter demonstrates the extent of the data protection problems in China, and the public’s growing concern about loss of privacy and abuse of their personal data. It proceeds to show that under China’s Cyber Security Law, the government has responded to this issue by strengthening ‘data protection’ from abuse by private companies but without shielding ‘data privacy’ from government intervention. In particular, enforced real-name user registration for online services potentially allows the Chinese government to demand access to the local data of any person who uses an online service in China, for national security or criminal investigation purposes. The chapter argues that this internal contradiction within the Cyber Security Law – increased data protection while demanding real-name user registration – may also benefit AI development. This is due, in part, to the vagueness of key terms within the Cyber Security Law, and the accompanying fuzzy logic within the Privacy Standards issued under that law, which allow both tech firms and government regulators considerable discretion in how they comply with and enforce data protection provisions. In the final part of the chapter, it is argued that due to the potential benefits of AI in solving serious governance problems, the Chinese government will only selectively enforce the data privacy provisions in the Cyber Security Law, seeking to prevent commercial abuse without hindering useful technological advances.
In Government Cloud Procurement, Kevin McGillivray explores the question of whether governments can adopt cloud computing services and still meet their legal requirements and other obligations to citizens. The book focuses on the interplay between the technical properties of cloud computing services and the complex legal requirements applicable to cloud adoption and use. The legal issues evaluated include data privacy law (GDPR and the US regime), jurisdictional issues, contracts, and transnational private law approaches to addressing legal requirements. McGillivray also addresses the unique position of governments when they outsource core aspects of their information and communications technology to cloud service providers. His analysis is supported by extensive research examining actual cloud contracts obtained through Freedom of Information Act requests. With the demand for cloud computing on the rise, this study fills a gap in legal literature and offers guidance to organizations considering cloud computing.
This article makes the counterintuitive argument that the millennia-old approach of Jewish law to regulating surveillance, protecting communications, and governing collection and use of information offers important frameworks for protecting privacy in an age of big data and pervasive surveillance. The modern approach to privacy has not succeeded. Notions of individual “rights to be let alone” and “informational self-determination” offer little defense against rampant data collection and aggregation. The substantive promise of a “fundamental human right” of privacy has largely been reduced to illusory procedural safeguards of “notice” and “consent”—manipulable protections by which individuals “agree” to privacy terms with little understanding of the bargain and little power to opt out. Judaism, on the other hand, views privacy as a societal obligation and employs categorical behavioral and architectural mandates that bind all of society's members. It limits waiver of these rules and rejects both technological capacity and the related notion of “expectations” as determinants of privacy's content. It assumes the absence of anonymity and does not depend on the confidentiality of information or behavior, whether knowledge is later used or shared, or whether the privacy subject can show concrete personal harm. When certain types of sensitive information are publicly known or cannot help but be visible, Jewish law still provides rules against their use. Jewish law offers a language that can guide policy debates. It suggests a move from individual control over information as the mechanism for shaping privacy's meaning and enforcement, to a regime of substantive obligations—personal and organizational—to protect privacy. It recognizes the interconnected nature of human interests and comprehends the totality of the harm that pervasive surveillance wreaks on individuals and social relations. It offers a conceptual basis for extending traditional privacy protections to online spaces and new data uses. And it provides a language of dignity that recognizes unequal bargaining power, rejects the aggregation and use of information to create confining personal narratives and judgments, and demands equal protection for all humans.