As other chapters in this volume show, the EU remedies system is difficult to employ when it comes to EU fundamental rights violations. When discussing the (im)possibilities of procedural rules and how these encourage or discourage litigation, socio-legal scholars have referred to the concept of legal opportunity structures. Viewed through this lens, the EU is a system with closed procedural legal opportunities: rules on directly accessing the CJEU severely limit the possibilities for strategic litigation. At the same time, the EU has also opened up legal opportunities by giving litigants a new catalogue of rights to invoke. In the context of fundamental rights accountability, strategic litigation is used extensively. This raises the question: how are actors (NGOs, lawyers, individuals) making use of the (partially) closed EU system, and what lessons can be drawn from their experience? This chapter delves into several cases of mobilisation of the EU remedies system and describes how the actors involved worked with or around EU legal opportunity structures, both inside and outside the context of formal legal procedures. The lessons drawn from these actions can inform future action in this field.
This contribution examines the possibilities for individuals to access remedies against potential violations of their fundamental rights by EU actors, specifically EU agencies’ deployment of artificial intelligence (AI). Presenting the intricate landscape of the EU’s border surveillance, the chapter sheds light on the prominent role of Frontex in developing and managing AI systems, including automated risk assessments and drone-based aerial surveillance. These two examples illustrate how the EU’s AI-powered conduct endangers fundamental rights protected under the EU Charter of Fundamental Rights. Risks emerge for privacy and data protection rights, non-discrimination, and other substantive rights, such as the right to asylum. In light of these concerns, the chapter then examines the possibilities of accessing remedies, first considering the impact of AI uses on the procedural rights to good administration and effective judicial protection, and then clarifying the emerging remedial system under the AI Act in its interplay with the EU’s existing data protection framework. Lastly, the chapter sketches the evolving role of the European Data Protection Supervisor, pointing out the key areas that demand further clarification in order to fill the remedial gaps.
This article examines the National Health Data Network (RNDS), the platform launched by the Ministry of Health in Brazil as the primary tool for its Digital Health Strategy 2020–2028, including its innovation aspects. The analysis proceeds through two distinct frameworks: the right to health and personal data protection in Brazil. The first approach is rooted in the legal framework shaped by Brazil’s trajectory on health since 1988, marked by the formal acknowledgment of the right to health and the establishment of the Unified Health System, Brazil’s universal-access health system, encompassing public healthcare and public health actions. The second approach stems from the repercussions of the General Data Protection Law, enacted in 2018, and the inclusion of the right to personal data protection in Brazil’s Constitution. This legislation, akin to the EU’s General Data Protection Regulation, addressed the gap in personal data protection in Brazil and established principles and rules for data processing. The article begins by explaining the two approaches, and then provides a brief history of health informatics policies in Brazil, leading to the current Digital Health Strategy and the RNDS. Subsequently, it delves into an analysis of the RNDS through the lenses of the two aforementioned approaches. In the final discussion sections, the article attempts to extract lessons from the analyses, particularly in light of ongoing debates such as the secondary use of data for innovation in the context of differing interpretations of innovation policies.
Society needs to influence and mould our expectations so that AI is used for the collective good. We should be reluctant to throw away hard (and recently) won consumer rights and values on the altar of technological developments.
By establishing a common data governance mechanism across the EU, the Regulation on the European Health Data Space (EHDS) aims to enhance the reuse of electronic health data for secondary use (e.g. public health, policy-making, scientific research) purposes and realise associated benefits. However, the EHDS requires health data holders to make available vast amounts of personal and non-personal electronic health data, including electronic health data subject to intellectual property (IP) rights, for secondary use, which may pose risks for stakeholders (patients, healthcare providers and manufacturers alike). This paper highlights some conceptual legal problems that need to be addressed in order to provide clearer regulatory requirements to ensure effective and consistent implementation of key data minimisation measures (anonymisation or pseudonymisation) and data management safeguards (secure processing environments). The paper concludes that the EHDS has been drafted ambiguously (for example, its definition of “electronic health data” or the list of “minimum categories of electronic data for secondary use”), which could lead to inconsistent data management practices and may impair the rights and legitimate interests of data subjects and rights holders. To address legal uncertainties, prevent fragmentation and mitigate or eliminate risks, the EHDS requires closely coordinated implementation and legislative fine-tuning.
Non-fungible tokens (NFTs) introduce unique concerns related to the privacy of personal data. To create an NFT, users upload data to publicly accessible and searchable databases. This data can encompass information essential to the creation, transfer, and storage of the NFT, as well as personal details pertaining to the creator. Additionally, users might inadvertently engage with technology crafted to gather personal data. Traditional paradigms of privacy have not evolved in tandem with advancements in NFT and blockchain technology. To pinpoint where current privacy paradigms falter, this chapter begins with an introduction to NFTs, elucidating their foundational technical mechanisms and processes. Subsequently, the chapter juxtaposes current and historical privacy frameworks with NFTs, underscoring how these models may be either overly expansive or excessively restrictive for this emerging technology. This chapter suggests that Helen Nissenbaum’s concept of “contextual integrity” might offer the requisite flexibility to cater to the distinct attributes of NFTs. In conclusion, while there is a pronounced societal drive to safeguard citizen data and privacy, the overarching aim remains the enhancement of the collective good. In balancing these objectives, governments should be afforded the latitude to weigh society’s privacy interests against its imperative for transparency.
The world is witnessing an increase in cross-border data transfers and breaches orchestrated by State and non-State actors. Cross-border data transfers may create friction among States over whether to localize or globalize data and over the regulatory frameworks that should govern them. “Data warfare” or information-war operations are often not covered by conventional rules; instead, they are categorized as acts of espionage and left to domestic regulation. Such operations are used to achieve a variety of objectives, including stealing sensitive information, spreading propaganda, and causing economic damage. Notable instances of the theft of sensitive information include the recent Bangladesh government website breach, which exposed 50 million records, and the hack of the Unique Identification Authority of India (UIDAI) website.
Regulating the “data war” under existing principles of international law may fail to produce robust international legal frameworks capable of addressing the associated challenges. These developments further accentuate the global divide between data-rich regions in the Global North, with strong data protection mechanisms (such as the GDPR and the California Privacy Rights Act), and regions in the Global South, which lack comprehensive data protection laws and regulatory regimes. This disparity underscores the urgent need for global cooperation on substantial international regulatory mechanisms.
This article examines the complexities surrounding data warfare, delving into the concept of a “data war” and highlighting the pressing need to establish a robust global legal framework for data protection. It also acknowledges the growing influence of advanced technologies such as data computing and mining, and the ongoing threats they pose to the fundamental rights of individuals whose personal data is exposed. The authors address the deficiencies in international legal provisions and advocate a global regulatory approach to data protection as a critical means of safeguarding personal freedoms and countering the escalating threats in the digital age.
Global digital integration is desirable and perhaps even inevitable for most States. However, there is currently no systematic framework or narrative to drive such integration in trade agreements. This article evaluates whether community values can offer a normative foundation for rules governing digital trade. It uses the African Continental Free Trade Area (AfCFTA) Digital Trade Protocol as a case study and argues that identifying and solidifying the collective needs of the African region through this instrument will be key to shaping an inclusive and holistic regional framework. These arguments are substantiated by analysis of the regulation of cross-border data flows, privacy and cybersecurity.
Digital traces that people leave behind can be useful evidence in criminal courts. However, in many jurisdictions, the legal provisions setting the rules for the use of evidence in criminal courts were formulated long before these digital technologies existed, and there seems to be an increasing discrepancy between legal frameworks and actual practices. This chapter investigates this disconnect by analyzing the relevant legal frameworks in the EU for processing data in criminal courts, and comparing and contrasting these with actual court practices. The relevant legal frameworks are criminal and data protection law. Data protection law is mostly harmonized throughout the EU, but since criminal law is mostly national law, this chapter focuses on criminal law in the Netherlands. We conclude that existing legal frameworks do not appear to obstruct the collection of data for evidence, but that regulation on collection in criminal law and regulation on processing and analysis in data protection law are not integrated. We also characterize as remarkable the almost complete absence of regulation of automated data analysis – in contrast with the many rules for data collection.
The past decade has seen a marked shift in the regulatory landscape of UK higher education. Institutions are increasingly assuming responsibility for preventing campus sexual misconduct, and are responding to its occurrence through – amongst other things – codes of (mis)conduct, consent and/or active bystander training, and improved safety and security measures. They are also required to support victim-survivors in continuing with their education, and to implement fair and robust procedures through which complaints of sexual misconduct are investigated, with sanctions available that respond proportionately to the seriousness of the behaviour and its harms. This paper examines the challenges and prospects for the success of university disciplinary processes for sexual misconduct. It focuses in particular on how to balance the potentially conflicting rights to privacy held by reporting and responding parties within proceedings, while respecting parties’ rights to equality of access to education, protection from degrading treatment, due process, and the interests of the wider campus community. More specifically, we explore three key moments where private data is engaged: (1) in the fact and details of the complaint itself; (2) in information about the parties or circumstances of the complaint that arise during the process of an investigation and/or resultant university disciplinary process; and (3) in the retention and disclosure (to reporting parties or the university community) of information regarding the outcomes of, and sanctions applied as part of, a disciplinary process. We consider whether current data protection processes – and their interpretation – are compatible with trauma-informed practice and a wider commitment to safety, equality and dignity, and reflect on the ramifications for all parties where that balance between rights or interests is not struck.
Remote care technologies help patients connect with their caregivers through monitoring, alerts, anomaly detection, and so on. By their nature, remote care technologies cut across a number of legal fields, such as privacy and data protection, cybersecurity, and medical devices regulation. This paper aims to close the gap between high-level legal principles and practical implementation by mapping the challenges in European Union (EU) law and combining them with initial results from the TeNDER project. Specifically, we focus on technologies that create an alert system by combining data from electronic health records and connected devices. Using these solutions as a starting point, we analyze the obligations EU law imposes on the stakeholders, that is, the designers or developers and the users, who are patients and caregivers. We answer the following research question: Which challenges does EU law pose for designers and users of remote care solutions, and in what manner can those challenges be addressed in practice? We then analyze and apply the principles of privacy and data protection (proportionality, lawfulness, and data quality) and cybersecurity notification duties, and discuss the possible classification of these technologies as medical devices. For all three areas, we use the project’s two-pronged approach: a big-picture description of the legal challenges posed by remote care technologies, and a detailed description of the legal obligations applicable to the developers as well as the users (i.e., caregivers and patients). We will follow up this work with repeated impact assessments in order to determine the benefits and pitfalls of the current approach.
As facial recognition technology (FRT) becomes more widespread, and its productive processes, supply chains, and circulations more visible and better understood, privacy concepts become more difficult to consistently apply. This chapter argues that privacy and data protection law’s clunkiness in regulating facial recognition is a product of how privacy and data protection conceptualise the nature of online images. Whereas privacy and data protection embed a ‘representational’ understanding of images, the dynamic facial recognition ecosystem of image scraping, dataset production, and searchable image databases used to build FRT suggest that online images are better understood as ‘operational’. Online images do not simply present their referent for easy human consumption, but rather enable and participate in a sequence of automated operations and machine–machine communications that are foundational for the proliferation of biometric techniques. This chapter demonstrates how privacy law’s failure to accommodate this theorisation of images leads to confusion and diversity in the juridical treatment of facial recognition and the declining coherence of legal concepts.
There is a convergence between the protection of the traditional right to privacy and today’s right to data protection, as evidenced by judicial rulings. However, there are still distinct differences among jurisdictions based on how personal data is conceived (as a personality or proprietary right) and on the aims of regulation. These differences have implications for how the use of AI will impact the laws of the US and the EU. Nevertheless, there are some regulatory convergences between US and EU law in terms of the realignment of traditional rights through data-driven technologies, the convergence between data protection safeguards and consumer law, and the dynamics of legal transplantation and reception in data protection and consumer law.
On 25 October 2021, Nigeria became the second country in the world, and the first in Africa, to launch a central bank digital currency. Launched with the tag line “Same Naira. More possibilities”, the Central Bank of Nigeria publicized the eNaira as having the capability to deepen financial inclusion, reduce the cost of financial transactions and support a more efficient payment system. However, more than one year after its launch, its usage has yet to reach critical mass. This article identifies the significant challenges that hinder the eNaira’s acceptance and render it potentially ineffective. First, its status as legal tender is questionable; secondly, it undermines privacy, a critical component of physical cash; and thirdly, it is incapable of wide acceptance by individuals and entities across Nigeria. The article explains each of these challenges and proposes a roadmap to the eNaira’s acceptance and effectiveness.
This article explores the proposed amendments to the AI Act, which introduce the concept of “groups of persons”. The inclusion of this notion has the potential to broaden the traditional individual-centric approach in data protection. The analysis explores the context and the challenges posed by the rapid evolution of technology, with an emphasis on the role of artificial intelligence (AI) systems. It discusses both the potential benefits and challenges of recognising groups of people, including issues such as discrimination prevention, public trust and redress mechanisms. The analysis also identifies key challenges, including the lack of a clear definition for “group”, the difficulty in explaining AI architecture concerning groups and the need for well-defined redress mechanisms. The article also puts forward recommendations aimed at addressing these challenges in order to enhance the effectiveness and clarity of the proposed amendments.
Edited by
Rob Waller, NHS Lothian; Omer S. Moghraby, South London & Maudsley NHS Foundation Trust; Mark Lovell, Esk and Wear Valleys NHS Foundation Trust
Having ideas for technology-enabled care is easy, but implementing them is hard. The chances of success can be improved by building a broad coalition of essential stakeholders, persuading the sceptical, respecting your information governance colleagues, planning carefully, and thinking about and minimising risks. Leadership, strategic thinking, persuasion, attention to detail and a team approach are needed. Professional project management, good marketing and communication are vital for implementation, and a robust evaluation is essential for sustainability.
The chapter explores the question of how different domestic and international law approaches to regulating the international transfer of personal data deal with cybersecurity threats. It examines the 2016 EU General Data Protection Regulation, the 2021 UK National Security and Investment Act, and the 2018 United States–Mexico–Canada Agreement, as representing distinct approaches to regulating international data transfers, namely data protection legislation, investment screening legislation, and digital trade agreements. The analysis demonstrates that a lack of uniformity in terms of what constitutes an adequate level and design of data protection mechanisms has left the issue of how to distinguish between acceptable and non-acceptable data-transfer restrictions largely unresolved.
Europe’s path to digitisation and datafication in finance has rested upon four apparently unrelated pillars: (1) extensive reporting requirements imposed after the Global Financial Crisis to control systemic risk and change financial sector behaviour; (2) strict data protection rules; (3) the facilitation of open banking to enhance competition in banking and particularly payments; and (4) a legislative framework for digital identification imposed to further the European Single Market. This chapter suggests that together these seemingly unrelated pillars have driven a transition to data-driven finance. The emerging ecosystem based on these pillars aims to promote a balance among a range of sometimes conflicting objectives, including systemic risk, data security and privacy, efficiency, and customer protection. Furthermore, we argue that Europe’s financial services and data protection regulatory reforms have unintentionally driven the use of regulatory technologies (RegTech), thereby laying the foundations for the digital transformation of European Union (‘EU’) financial services and financial regulation. The EU experiences provide insights for other countries in developing regulatory approaches to the intersection of data, finance, and technology.
The purpose of this contribution is to briefly present the content of the EU–US Data Privacy Framework recently adopted by the European Commission and then to assess whether it meets the expectations expressed by the Court of Justice of the European Union in its Schrems II judgment and related case law.
The contribution of antitrust de lege lata to the separation of powers is rather limited. A role should nevertheless be granted to specific legislation, regulation or practices and, possibly, antitrust de lege ferenda, especially with respect to the control of the digital infrastructure of democracy, the prohibition of distortions of the electoral and democratic process, the conclusion of certain governmental contracts with large or, a fortiori, dominant platforms, as well as the regulation and deconcentration or decentralization of artificial intelligence and the Metaverse. Artificial intelligence and, notably, machine learning based on data face a cycle of concentration accentuating the need for regulation. Deconcentration or decentralization may be envisaged, imposed or incentivized. Furthermore, limits are especially necessary when substantial autonomous powers are granted to forms of artificial intelligence. The Metaverse, for its part, may reshape society, politics, and economy around the globe in the future. From an antitrust perspective, the Metaverse poses at least two main issues: antitrust on the Metaverse and antitrust in the Metaverse.