
1 - The Promise and Peril of Human Rights Technology

Published online by Cambridge University Press: 19 April 2018

Molly K. Land, University of Connecticut School of Law
Jay D. Aronson, Carnegie Mellon University, Pennsylvania

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2018
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/licenses/by-nc-nd/4.0/

The first two decades of the twenty-first century have seen a simultaneous proliferation of new technological threats to and opportunities for international human rights. New advances – not only the Internet, social media, and artificial intelligence but also novel techniques for controlling reproduction or dealing with climate change – make clear that scientific and technological innovations bring both risks and benefits to human rights. Efforts to protect and promote human rights have to take seriously the ways in which these technologies, and the forms of knowledge creation, production, and dissemination they enable, can create harms and be exploited to violate rights. At the same time, human rights practitioners must continue to seek creative ways to make use of new technologies to improve the human condition. This dichotomy is the central tension that animates both this volume and the emerging field of human rights technology.

The overriding purpose of the volume, and of the University of Connecticut workshop that launched it, is to encourage human rights institutions, experts, and practitioners to take seriously the risks and opportunities of technology for the promotion and protection of human rights. The volume uses diverse case studies to examine how the dynamic of intertwined threat and opportunity plays out in a range of contexts. Case studies focus on assisted reproductive technologies, autonomous lethal weapons, climate change technology, the Internet and social media, and water meters. Considering the relationship between technology and human rights across these diverse areas reveals both continuities and discontinuities in how technology affects the enjoyment of human rights.

We begin by laying out the principles that animate the project. These principles have been derived chiefly from international human rights law and practice, and also draw on the scholarly study of science, technology, and the law. Based on these principles, we define a “human rights” approach to the study of technology. Finally, we identify and analyze the cross-cutting themes that unite the book – power and justice, accountability, and the role of private authority – to chart a road map for further study of the relationship between technology and human rights.

I Defining a “Human Rights” Approach to Technology

This collection goes beyond analyzing the risks and opportunities of technology to articulate a human rights-based approach to understanding the impact of technological change on human rights. A human rights-based approach to technology in this context is defined by two elements: a reliance on international human rights law as a source of normative commitments; and a focus on accountability strategies derived from human rights practice. In order to examine human rights law and practice as they intersect with technology, the book also makes use of ideas and concepts from cyberlaw and science and technology studies.

Although human rights is clearly not the only lens through which we can view technological change, it is an essential one. Understanding how human rights law and practice intersect with technology offers a global baseline for addressing the cross-border impacts of technology, and it also provides guidance for human rights advocates who are deploying new techniques in their work and responding to the impacts of new technologies.

A Human Rights Law

To say that technology presents both opportunities and challenges for human rights is not to suggest it is neutral. To the contrary, the book is motivated by the recognition that the design of technology reflects and influences societal values and norms.Footnote 1 Technology matters for human rights not only because it can be used in ways that have negative or positive consequences for the enjoyment of human rights, but also because its very design can make those consequences more or less likely. The well-known maxim “code is law” is shorthand for the idea that design can encourage or discourage particular activities by making them more or less costly, and thus promote particular outcomes.Footnote 2
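
A minimal sketch can make this concrete. The following hypothetical fragment – invented for illustration, not drawn from any system discussed in this volume – shows how the same feature, shipped with different defaults, makes disclosure more or less likely for most users:

```python
# Hypothetical illustration of "code is law": the choice of default,
# made silently by a designer, steers the privacy outcome for most users.

def share_location_opt_in(user_consented: bool = False) -> bool:
    """Location is shared only if the user affirmatively opts in."""
    return user_consented  # default behavior: private

def share_location_opt_out(user_opted_out: bool = False) -> bool:
    """Location is shared unless the user finds and flips a setting."""
    return not user_opted_out  # default behavior: disclosed

# Most users never change defaults, so the second design discloses
# location for most users while the first does not - a regulatory
# effect achieved without any law at all.
```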

The ability of technological design to steer outcomes necessarily means that decisions about design will reflect preexisting normative commitments. Those commitments can be derived from a variety of sources, including community values, constitutional precepts, or individual morals. This volume uses international human rights law to orient its discussion of technological design and implementation. That orientation encompasses not only the formal content of international human rights law but also a range of specific commitments that characterize human rights practice, including commitments to participation in decision-making and an emphasis on the needs of the most vulnerable. In this sense, the volume uses “human rights” in the specific rather than the general sense – not as a general proxy for “social good,” but rather as a set of internationally recognized legal norms and established practices. In fact, these norms and practices are increasingly characterized as a “human rights-based approach” in a variety of social justice contexts.Footnote 3

In reorienting discussions about human rights and technology on the core values of international human rights, the contributors to this volume disavow two tropes that generally dominate such analysis. The first is reductionist thinking about technology, which focuses on innovation and technology as silver bullets or even as goals in and of themselves, rather than as tools that embody both opportunities and risks. The second trope is reductionist thinking about human rights, which tends to reflect unrealistic assumptions about the effectiveness and functioning of international institutions or emphasizes legal accountability over other methods of responding to human rights violations.

We want to move the conversation away from these well-worn paths and reorient it on the fundamental values of a human rights-based approach, which emphasizes universality/inalienability, indivisibility, interdependence/interrelatedness, equality and nondiscrimination, participation/inclusion, and accountability/rule of law.Footnote 4 Each of these principles yields insights for understanding the contribution of a human rights-based approach to technology.

Equality and nondiscrimination require attending to the situation of the most vulnerable and demand that all people, regardless of their position in society, have access to the tools and knowledge needed to make their lives better. Accountability means that people must also have access to institutional spaces and mechanisms that allow them to make rights claims and to seek redress from accountable parties, whether governmental or non-state, when these tools and innovations have negative impacts on their lives or when they lack access to the benefits of these tools. Participation means that users and others affected by technological innovation must be meaningfully involved in, not just consulted on, the development and design of technology. Universality and inalienability require us to look beyond the ostensible neutrality of technology to recognize the power and privilege that are embedded in technological systems. Finally, indivisibility, interdependence, and interrelatedness mandate attention to the effects of technology not only on civil and political rights, such as freedom of expression and privacy, but also on rights to water, health, and education, among others. Focusing on these core values of the human rights-based approach can help cut through some of the deterministic thinking that technology engenders and provide the foundation for an approach to technology and innovation that centers on people, not things or institutions.

B Human Rights Practices

Technology can also play a central role in human rights accountability practices. Human rights practitioners have developed a set of accountability strategies over the past several decades that have emerged from the peculiarities of international human rights law. Human rights are protected by international treaties that create binding legal commitments for the states that ratify them. Almost always, though, these international treaties are paired with extremely weak enforcement mechanisms. The result is that human rights practitioners have had to rely on indirect compliance strategies, most notably “naming and shaming.”Footnote 5 While shame can be a component of domestic law enforcement as well,Footnote 6 it has over time become a primary strategy for holding states accountable for violations of international human rights law. This feature of human rights practice has important consequences when considering the effects of technology on the promotion and protection of rights. As we discuss below, although new technologies allow greater participation by ordinary citizens in accountability processes and offer new methods for preserving evidence, the use of technology by state actors also fragments state authority and thus makes accountability efforts more challenging.

In focusing on the effects of technology on compliance and enforcement, this volume also contributes to ongoing debates about the effectiveness of international human rights law. In the absence of a centralized authority to enforce rights, international human rights law and institutions may seem far more toothless than would be expected given the importance of the values they claim to protect.Footnote 7 Yet the power of human rights law is located not in its coerciveness, but in its capacity to serve as a vehicle for the assertion of political demands. Thus, the book envisions a multidimensional model of social change – one in which human rights operate bottom-up as well as top-down, and in which a variety of actors engage both horizontally and vertically in an iterative process of incremental change.Footnote 8

The volume also examines how technology affects this process. Can choices about technological design strengthen efforts to protect rights by encoding human rights values directly into the structures in which communication, knowledge creation, and reproduction take place? Or will technological innovation disproportionately serve the interests of the powerful because of disparities in the knowledge and resources needed to use, deploy, and interrogate it critically? In some cases, might the use of technology slow down processes of social change by rendering invisible deliberate choices that have been made to restrict rights? As discussed at the outset of this introduction, technology is not an either/or proposition; the same technology may do all of these things and more. A goal of the contributions in this volume is to tease out when, where, and under what conditions technology can strengthen and protect rights.

C Cyberlaw and STS

The volume also adopts an interdisciplinary approach aimed at bringing international human rights law into conversation with two scholarly disciplines that examine the intersection of law and technology: cyberlaw and science and technology studies (STS). These disciplines share a common commitment to better understanding the technical, social, political, legal, and cultural dimensions of the development of new technologies and the new social arrangements they both accompany and foster.

STS, for example, recognizes that there is a certain amount of experimentation involved in the introduction of a new technology into society. STS scholars explore the intentions of those who deploy new technologies in society, examine the unexpected consequences of the introduction of new technologies, and analyze how societies respond to and shape these new tools, methods, and domains of knowledge.Footnote 9

This conversation promises to be generative and challenging for both human rights and STS. For example, the idea that the introduction of new technologies is an “experiment” seems at first to legitimize human experimentation, which, when conducted without consent, violates international human rights law. On the other hand, the concept of experimentation provides a foundation for questioning the circumstances and effects of decisions associated with the introduction of a new technology, and for integrating greater human rights protections into the process. For example, as Lea Shaver notes in Chapter 2, vulnerable populations are often chosen as initial targets for the introduction of new technology when its impacts are unknown, even when, and sometimes precisely when, negative outcomes are directly anticipated.Footnote 10 A human rights-based approach to technology informed by the insights of STS might recognize and accept that the introduction of new technology is inevitably experimental, but require that in the process, vulnerable populations be protected from the accompanying risks and share in the potential benefits.

Cyberlaw scholarship has also been a highly generative frame for thinking about how human rights law is affected by, and should respond to, new technological innovations. As legal scholar Lawrence Lessig notes, law and technology are two “modalities of regulation” that can serve to undermine, strengthen, narrow, expand, or displace one another by making regulation invisible, ensuring precision in the delivery of essential goods and services, or fragmenting decision-making.Footnote 11 Human rights law provides a unique case study for testing the regulatory effects of technology. While much of cyberlaw focuses on how technology regulates individual behavior, human rights is interested in how technology might constrain or enable regulation by the state. Although it is essential from a human rights perspective to understand how technology can be used by states to affect rights, we are equally concerned with the use of technology to promote human rights within domestic and international law.

II Cross-Cutting Themes

The chapters in this volume, which are described and analyzed in brief at the beginning of each section, highlight three common themes associated with interactions between human rights and technology: the relationship between technology and power, the effect of technological innovation on accountability, and the shifting boundary between public and private.

A Technology, Power, and Justice

One of the clearest and most important themes running through all of the contributions to this volume is the relationship between technology and power, and the effect of this relationship on the achievement of social justice and human rights. Although often heralded as a means to decentralize and destabilize power relationships, technology also reinforces and exacerbates inequality. Part of the value of combining a human rights approach with STS is to reveal the linkages between technology and power and examine how resources are distributed.

Technology is often seen as a means to shift power to the powerless. For example, mobile phones, social media, and the Internet can decrease the cost of communication, thereby making it more accessible to the public. In theory, this ought to shift power to ordinary individuals to participate in social, cultural, economic, and political life, and to take part in efforts to seek accountability for human rights violations. These shifts in power thereby destabilize and reconfigure the human rights domain. As Jay Aronson notes in Chapter 6, changes in how information is produced can alter the role and authority of human rights researchers and can give a voice to those affected by human rights violations.Footnote 12 Technology can also increase the delivery of essential services to remote areas, thus enabling the fulfillment of economic and social rights, or make possible choices about family formation that were not previously accessible to many.

At the same time, these shifts occur against the backdrop of unevenly distributed resources – and in many instances may exacerbate that unevenness. In Chapter 9, Ella McPherson illustrates how the deployment of these new techniques creates risks that not all human rights organizations are equally equipped to manage. Further, as John Emerson, Margaret Satterthwaite, and Anshul Vikram Pandey discuss in Chapter 8, technology may enable human rights defenders to convincingly articulate their demands for justice and restitution, but promoting it may be harmful if it does not come with the resources organizations need to collect, manage, and use information safely. The use of remote sensing, big data, data-visualization techniques, or even quantitative analysis may require knowledge and expertise beyond the reach of most human rights advocacy organizations. Dalindyebo Shabalala (Chapter 3) illustrates global inequities in climate change technologies, which, like many emerging technologies, are often developed in well-resourced settings and only later diffused, if at all, to other parts of the world.Footnote 13 G. Alex Sinha (Chapter 12) similarly emphasizes the difficulty that even technologically savvy and well-resourced individuals face in protecting their own security and privacy online. For human rights defenders and organizations operating on a shoestring budget, with little aid directed to general operating expenses, doing so is nearly impossible.

Further, technological innovations may be fundamentally skewed toward inequality. STS literature, for example, has long emphasized that “conventional science and innovation policies increase inequalities, unless they are designed specifically to do otherwise.”Footnote 14 Innovation systems that allocate rewards through the market, for example, are structurally biased toward producing goods that benefit those who are already well-off.Footnote 15 Fewer resources are invested in developing technology that benefits the poor. Moreover, as Lea Shaver demonstrates in Chapter 2, when technology is deployed in poor areas, it can have the effect of limiting rights rather than protecting them. The water meters in Soweto, South Africa, that anchor her analysis did not shift power to the poor, but rather consolidated control and authority in the state and the affiliated entity installing and running the meters. McPherson makes a similar point in her contribution. Although ICTs do enable human rights communication, the risks they engender mean that they may amplify only the voices of the largest and most powerful organizations, leaving little room for smaller and less well-resourced groups to reach global audiences.

Climate mitigation technologies also exacerbate global inequality by pitting the interests of the powerful against the less powerful.Footnote 16 Shabalala, in Chapter 3, examines the way in which those most affected by climate change and most in need of mitigation technology are precisely those with the least power to bargain for it. There is an assumption that simply allowing information to be “free” will have positive social justice returns. In reality, who has access to this information is determined by existing power dynamics that depend on capital, investment, know-how, and intellectual property rights. Without changes to those underlying conditions, technology will at best make only marginally positive contributions to addressing inequalities at the national, regional, or global level. More likely, it will serve to reinforce those inequalities.Footnote 17

For technology to serve the interests of the less powerful, accessibility is not enough. Technologies do not work in a vacuum, but rather depend upon complex networks of expertise, maintenance, and governance that embody structural inequalities. Efforts to apply new technologies to human rights problems must begin by asking a series of important questions about power, including who stands to benefit from any changes the technology makes to the status quo, and by recognizing that those benefits are not equally distributed. It is essential to guard against the intentional bias built into technologies and their implementation, as well as against unintentional negative consequences. We also need to think much more carefully about the state’s obligation not just to promote technological innovation and access to technology, but to do so in a way that supports rather than hinders the enjoyment of human rights.

B The Challenge of Accountability

This book also examines the impact of new technologies on efforts to promote accountability for human rights violations. As demonstrated in Part II, new technologies are often seen as having the capacity to revolutionize accountability efforts, providing opportunities for predicting, preventing, and mitigating atrocity crimesFootnote 18 as well as holding human rights abusers accountable for those violations.Footnote 19 The case for technology as a crucial new accountability tool has several dimensions:

  • The falling cost of documentation technologies means that ordinary individuals now often possess the tools they need to capture and share information about violations. Rather than having to rely on trained researchers, documentation opportunities now exist wherever there is someone with a smartphone and an Internet connection. Citizen video generated in this way has been instrumental in identifying human rights abuses in many recent cases, such as in Israel’s attacks in Gaza in 2014.Footnote 20

  • Many technologies offer opportunities to gather information in remote or even inaccessible areas. Mobile phones can be distributed to isolated communities, thus enabling them to gather and transmit information about violations.Footnote 21 Satellite images can be used to collect information about violations occurring in places off-limits to researchers.Footnote 22

  • Digitization may contribute to accountability efforts. Because digital evidence is easy to share and tends to be difficult to destroy once widely distributed or preserved, it may be harder for states to keep evidence of human rights violations from reaching the hands of interested constituencies (see the sketch following this list).

  • Social networking technology supports the formation of groups, which can augment social movements designed to promote rights. Although scholars and advocates contest the existence and extent of the impact of technology on social mobilization, new technologies present at least the opportunity for mobilization around human rights documentation, advocacy, and capacity building.Footnote 23
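
One concrete mechanism behind the durability of digital evidence, noted in the third bullet above, is cryptographic hashing: fixing a short “fingerprint” of a file at the moment of capture so that any later copy can be verified against it. The sketch below uses only Python’s standard library; the filenames and workflow are hypothetical, and real documentation tools add much more (metadata, signatures, secure storage).

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file - a tamper-evident fingerprint."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical workflow: compute fingerprint("incident_video.mp4") at
# capture time and publish the digest widely. Any later copy with the
# same digest is bit-for-bit identical to the original; any alteration,
# however small, produces a different digest.
```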

To be clear, technology is not a silver bullet.Footnote 24 At the same time, it is a critical element in present and future efforts to hold human rights abusers accountable.Footnote 25

Behind the push to incorporate new technologies into human rights accountability efforts is an assumption about technology that is fundamentally incorrect: that because new information and communication technologies can be used to collect, analyze, and disseminate information, they are automatically biased toward greater disclosure and transparency. Technology, however, can be used just as easily to disguise, hinder, and obscure responsibility.

This is not intended as a tired recitation of the truism that technology is a tool that can be directed to both good and bad ends. Clearly, the use of technology by states and other duty bearers can undermine accountability efforts. And reliance on technology by human rights organizations can divert those organizations and their resources away from other activities that may have more of an impact on rights protection and promotion. But technology is not just used in good and bad ways; rather, it is in many contexts actually biased against disclosure and accountability. For example, the use of technology by states can obscure and fragment authority and thus disable the mechanisms that human rights advocates use to promote accountability. Because of the absence of coercive mechanisms for enforcing human rights, human rights advocates have traditionally relied on the deployment of shame – exposing and publicizing human rights abuses – to put pressure on violators to change their behavior. This methodology functions most effectively when those exposing the abuse are able to tell a story that points to a specific “violation” attributable to the actions or decisions of a specific “violator” and for which there is a potential remedy.Footnote 26 Simply publicizing harms without explaining who is at fault or how the harms can be remedied may not motivate either the public or those who have the power to respond.

It can be more difficult to use shame to challenge activities that impact rights when those activities are mediated by technology. Technology disables shaming as a modality of enforcement because it obscures agency.Footnote 27 Activities that are accomplished through automation, for example, appear to be the inevitable result of predetermined processes set in motion by an invisible hand – even when those processes are the product of decisions that reflect and embody value judgments. In Shaver’s chapter, for example, the introduction of water meters that deprived residents of adequate water was not immediately seen as a human rights violation because the act of cutting off water was carried out by an automatic shutoff valve rather than by a human being. As Laura Dickinson (Chapter 5) illuminates, the automation of decision-making in armed conflict also creates an open question about who is actually making the decision that results in a violation, and thus who can and should be held accountable. Over time, technology may make rights-impacting decisions seem inevitable rather than the product of human agency, thus further complicating efforts to attribute these decisions to particular actors.
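
A stylized sketch may help show how such a value judgment disappears into machinery. The logic below is entirely hypothetical – we are not describing the actual Soweto meters – and the allowance figure is a placeholder policy parameter:

```python
# Hypothetical prepaid water meter logic. The valve does the cutting off,
# but every rights-relevant choice here was made by people upstream.

MONTHLY_FREE_ALLOWANCE_L = 6000  # placeholder: free allocation, a policy choice

def valve_open(used_this_month_l: float, prepaid_credit: float) -> bool:
    """Decide whether water flows to the household."""
    if used_this_month_l < MONTHLY_FREE_ALLOWANCE_L:
        return True  # within the free allowance
    return prepaid_credit > 0.0  # beyond it, only while credit remains

# When valve_open(...) returns False, no official ever "decides" to cut
# off the household's water. Yet someone chose the allowance, the pricing,
# and the rule that exhausted credit closes the valve immediately, with
# no hardship exception.
```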

Technology also obscures agency because it interrupts the relationship between actor and effect. The chapters by Dickinson on drones, by Rikke Frank Jørgensen on the Internet (Chapter 11), and by Mark Latonero on big data (Chapter 7) demonstrate that these questions become particularly complicated and challenging when automation is involved. Who, if anyone, is responsible for a violation of humanitarian law when the targeting decision is made by an automated or semiautomated system? If a computer program systematically identifies individuals from a minority group as suspicious and those individuals are targeted by law enforcement, who is responsible for this discriminatory treatment?Footnote 28 Even the very complexity of new technologies can interrupt the relationship between actor and effect. Technological artifacts can require inputs from a variety of different actors along the supply chain, any of whom might have contributed to the harm. If a decision to use lethal force is made based on a range of inputs analyzed according to predetermined algorithms programmed by a team of engineers, who is responsible for any resulting violation of humanitarian law? Even if new fact-finding technologies can be used to bring to light previously hidden information, this may not be sufficient if the deployment of technology obscures and attenuates the relationship between these violations and those responsible for them.
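
The attribution problem described above can be made concrete with a toy scoring pipeline, again invented for illustration. Each function below might be authored by a different actor, and each looks innocuous in isolation; any discriminatory effect emerges only from the assembled whole:

```python
# Toy risk-scoring pipeline, hypothetical throughout. Responsibility for
# the final decision is split across three actors.

def engineer_features(record: dict) -> dict:
    # Data team: selects inputs. "neighborhood_rate" can quietly proxy
    # for ethnicity in a segregated city.
    return {"prior_stops": record["prior_stops"],
            "neighborhood_rate": record["neighborhood_rate"]}

def risk_score(features: dict) -> float:
    # Modeling team: fixes the weights (placeholder values).
    return 0.3 * features["prior_stops"] + 0.7 * features["neighborhood_rate"]

def flag_for_stop(record: dict, threshold: float = 0.5) -> bool:
    # Deploying agency: sets the threshold and acts on the output.
    return risk_score(engineer_features(record)) >= threshold

# If flag_for_stop systematically singles out members of a minority
# group, where does the violation sit: in the feature choice, the
# weights, the threshold, or the decision to deploy at all?
```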

Further attenuation of the relationship between actor and violation may also undermine other mechanisms that exist for promoting compliance with human rights law. Human rights law relies not just on shame, but also on processes of acculturation to achieve respect for rights. Duty bearers comply with human rights norms not only for fear of sanction, but also because of social and cognitive pressures that encourage conformity.Footnote 29 Yet the mechanisms of acculturation may be less effective when a state deploys technology in ways that affect rights. Actors who have merely set the technology in motion may feel less responsible for any resulting violations because they were not themselves the proximate cause of the harm, and thus may be more immune to the pressures of acculturation. This effect is exacerbated when technology also introduces physical distance between actor and violation, as with drone strikes – the subject of Chapter 5, by Dickinson – which are now piloted by individuals located far outside the theater of war.

The use of technology by human rights actors may also undermine accountability by diverting attention from what is fundamentally a political struggle. New technologies do indeed facilitate the collection of previously inaccessible information, but the primary obstacle to seeking accountability for human rights violations is not in most instances a lack of information. As Jay Aronson points out in Chapter 6, efforts to seek accountability for human rights violations often fail because powerful actors – those who have the ability to exert pressure on human rights violators or hold them directly accountable – do not have the political will to act on the information that they receive. This is not a reason to stop collecting and preserving evidence of violations, of course. And perhaps more or better quality information, or information displayed in more powerful ways, might help nudge those actors toward action. Information, of course, is connected to and influences politics.

Our concern is that the current focus on technology and, by extension, on the collection and preservation of information, poses a much more fundamental risk – namely, the risk that those who fund and carry out human rights advocacy will focus their already limited energy and resources on developing technological rather than political solutions. This is in part a version of what Evgeny Morozov calls “technological solutionism” – “a fancy way of saying that for someone with a hammer, everything looks like a nail.”Footnote 30 We seize on technocratic solutions because we think those problems might actually be capable of being solved – and for human rights, where the enforcement mechanisms are quite weak but the harms so grave, a solvable problem is like a siren song. But pouring energy into solvable problems then requires us to frame information collection as the central problem, in order to justify these choices. If we are to use new technologies in human rights accountability efforts, it is imperative that we resist the impulse to frame problems in terms of the available solutions and thus divert resources from addressing even more pressing challenges. Investing in new technologies to improve evidence collection is important, but we should not neglect the more traditional advocacy and grassroots mobilization strategies that are necessary to generate the political will required for social change.

This is not, however, just a case of technological solutionism. Focusing on technological responses also risks depoliticizing human rights debates and thus depriving human rights rhetoric of the source of its power. Human rights frames are powerful because they are moral claims backed up by legal obligations. Fundamentally, rights claims are about challenging existing power structures by vesting the ability to make political claims in those who are affected by political decisions. Technological responses to human rights problems risk transforming this discourse from one that is fundamentally about power to one about technocratic solutions. Others have explored the way in which integrating human rights risk assessment into the procedures and policies of institutions and businesses may transform human rights from a claim on the powerful to a box to be checked.Footnote 31 The question is whether the attempt to develop technological solutions to human rights violations might similarly depoliticize human rights advocacy.

C Technology and Private Authority

The final cross-cutting theme of the volume addresses how human rights law can and should respond to the growth in private authority that results from the introduction of new technologies. For historical, economic, and political reasons, new technological developments and innovations often involve significant roles for the private sector, albeit with considerable support and intervention from the state.Footnote 32 Intermediaries are extraordinarily powerful gatekeepers for information and communication; they exert control over our expressive activity, our associations with others, and our access to information.Footnote 33 Non-state actors play central roles in developing and implementing new technology outside of the information and communication technology sector as well – including in the fields of water technology, reproductive technologies, and autonomous weapons, among others.

As Jørgensen explores in Chapter 11, human rights law is at a disadvantage in responding to the impact that non-state actors can have on human rights because it creates few direct legal obligations for these actors. Under principles endorsed by the United Nations in the Guiding Principles on Business and Human Rights, non-state actors typically have only a moral – not a legal – obligation to respect human rights.Footnote 34 Human rights law attempts to address harms from non-state actors by imposing legal obligations on the state to protect individuals from such harm and also to provide remedies when rights have been violated.Footnote 35 Thus, in most instances, the activities of non-state actors do not constitute human rights violations unless the state has failed to protect, punish, and remedy the violation.

The challenge of responding to human rights harms by non-state actors is not a new issue,Footnote 36 but it has particular significance in the context of human rights and technology. Because technology is often owned and operated by private actors, its use shifts decision-making authority into the private sphere and outside of public mechanisms of accountability. This effect is compounded by the practice of outsourcing, which is prevalent not only in the context of automated weapons, as Dickinson discusses, but also in the provision of services, as noted by Shaver (Chapter 2), among other areas. Private actors are also increasingly at the forefront of efforts to respond to and remedy human rights violations by others, particularly in the case of information and communication technologies. It is unclear whether human rights law will be up to the task of responding to this multidimensional growth in private authority over human rights.

Perhaps most fundamentally, the essays in this volume also raise questions about what constitutes a human rights violation. When a state fails to control a security company that abuses individuals in a local community, this failure to protect is clearly a violation of the state’s international obligations. The abuses themselves are also clearly human rights harms. Can we apply the same reasoning when an Internet company systematically disadvantages a particular political viewpoint?Footnote 37 Or when that same company curates controversial videos, allowing some but removing others?Footnote 38 Does the state’s failure to ensure that encryption technologies are both available and easy to use breach its obligation to create an enabling environment for the fulfillment of rights?Footnote 39 A focus on the role of non-state actors in the design and implementation of technology that affects human rights brings into sharp focus not only the state action problem inherent in all of human rights, but also new questions arising from automation and algorithmic decision-making.

Contributions to this volume ultimately illuminate four important and related challenges that human rights law will need to address to effectively respond to human rights harms by private actors with respect to the introduction and use of new technologies. First, the contributions illustrate the importance of finding better solutions to regulating the conduct of non-state actors when their activities have impacts on human rights. Relying on the state to regulate these actors is often not effective, given the lack of political will, as well as state interest and even complicity in many of these rights violations. Self-regulation by companies is also unlikely to constrain abuses in the long run.

Second, the volume illustrates how important it is to understand the application of the Guiding Principles on Business and Human Rights in context. Industries vary widely in terms of how private companies affect rights and what changes need to occur to better protect rights. In the water meter case, private contractors were acting at the behest of public authority, but the local government initially failed to intervene even after it became clear that the meters were resulting in harms to rights. In the context of information and communication technologies, many of the relevant harms seem to emanate from an excess of public authority, such as state efforts to monitor private communications or remove particular content from the Internet. At the same time, the state is also failing to take the positive measures needed to create an enabling environment that allows individuals to protect their own privacy. More work needs to be done to understand the nature of the human rights harms, and the application of the Guiding Principles to those harms, in these very fact-specific contexts.

Third, the volume as a whole also points to the challenge of regulating the growing role of non-state actors in governance activities. Non-state actors are involved in a broad range of regulatory activities previously thought to be the exclusive province of the state. Business entities are engaged in cooperative relationships with the state to provide essential services, from water to health care. In the information and communication technology sector, non-state actors are increasingly engaged in regulating the speech of others – removing defamatory statements and other forms of problematic expression – often at the behest of the states in which they do business. Private companies now routinely decide whether evidence of human rights violations uploaded to a private platform will be publicly available. They also build and market technologies that both enable and prevent surveillance, cooperate – or refuse to cooperate – with government requests to monitor activists or political dissidents, and create algorithms and weapons systems that may determine whether an individual lives or dies. As Jørgensen’s contribution makes clear, these decisions are often motivated by commercial interests rather than concerns about the public good or human rights.

Clearly, human rights law has long sought to understand the nature of the public and private obligations associated with privatization of essential services.Footnote 40 Nonetheless, the current framework for addressing human rights harms inflicted by business entities is built on the distinction between public authority (the responsibility of the state to protect) and private authority (the duty of the company to respect). As a result, it applies less well to activities that blur this distinction. When non-state actors are providing essential services or engaging in speech regulation, it is not clear that they have, or should have, only a moral duty to respect rights. What are the duties of private actors operating in these grey areas between public and private?

The distinction between public and private is further muddied by the fact that many of these governance activities are done at the behest of, or under the compulsion of, governmental authority, such as when states compel Internet providers to police speech online. Indeed, a good case can be made that in such instances, the activity is not actually “private” and thus gives rise to direct state responsibility.Footnote 41 In other cases, the line between public and private may not be very clear as a factual matter. The exponential growth of public-private partnerships in a range of industries may, over time, render the Guiding Principles, with their clear division of public and private, less and less relevant.

Industry “self-regulation” will not likely provide an answer. The European Commission, for example, recently negotiated a “Code of Conduct” with Facebook, Microsoft, Twitter, and YouTube for the purpose of combatting illegal hate speech online. This is a pledge the EU extracted from dominant market players to regulate the expression of those who use their platforms, without any regulation or oversight.Footnote 42 Even if private regulation may in some instances be consistent with human rights, there are still few mechanisms of accountability that govern private actors engaged in this kind of regulation. Although state control is often deeply problematic, private control lacks even the trappings of accountability and transparency that usually accompany governmental regulation. For example, the Code of Conduct has been followed by regulation in Europe (such as Germany’s new social media law) imposing heavy fines against social media companies that fail to quickly remove harmful online content.Footnote 43 Although the German law also poses risks for freedom of expression online, it was at least enacted pursuant to transparent and democratic processes designed to consider the public interest. (The more troubling part of the law, of course, is that it delegates much of the actual policing of online content to private social media platforms.)Footnote 44

Finally, the contributions also illustrate that human rights law must focus on the particular duties of the non-state actors who build, design, and program technology. What are the obligations of programmers and software engineers? Who is responsible for errors? Do we need to think differently about human rights accountability in sectors heavily driven by technological innovation? Technology embodies values, and those who design the technology can make respect for human rights more or less costly, efficient, or easy to accomplish. Routers can be built to permit surveillance, or not. Do those who build technology have moral or legal obligations to respect rights? If so, how do they integrate this commitment into their work?

III Conclusion: The Role of Law

The contributions to this volume seek to raise awareness about the very real opportunities and costs of technology for the protection and promotion of human rights. Human rights actors seeking to deploy technology in pursuit of human rights must be aware of its strengths and weaknesses, and they must be prepared to have these very tools turned against them. Moreover, they should guard against the impulse to allow solutions to obscure problems or ignore the way in which technology might reinforce rather than dismantle power disparities, not only between individuals and the state, but also between large and small human rights organizations.

The range of topics covered in the book makes clear that an important goal of those interested in human rights technology must be to promote capacity. There is a need for greater technical expertise – indeed, even simply greater comfort in learning about and engaging with technological innovation – within the human rights community. Moreover, existing expertise is far from evenly distributed, and significantly more attention must be paid to building the technological capacity of small human rights organizations and defenders around the world. Conversely, there is also a need among technology entrepreneurs and innovators for an understanding of what human rights law is and the opportunities and limits it presents. An important aim of this book is to help launch conversations between technologists and human rights practitioners, with the intention of promoting these critical linkages.

Human rights law itself also has an important role to play in maximizing the benefits and minimizing the risks of new technologies. At the very least, a human rights-based approach to technology should reorient decisions about technology toward individuals and the impact these decisions have on their rights. For example, human rights law might be used to advocate against efforts to introduce new technologies using utilitarian rationales that neglect important sectors of society. The risks of new technologies should be assessed prior to their introduction. Decisions about technology should not just consider its overall benefits to a society or its impact on development, but should also actively prioritize the needs of the most vulnerable members of that society. This applies as much to the design of technology as to its introduction. Programmers and engineers must view the design of technology and the creation of technological standards as value-based decisions that need to support rather than hinder the enjoyment of international human rights.

This is not to say that human rights law must, or should, hinder technological innovation. As this volume makes clear, new technologies also have important benefits for human rights. When technological innovation is oriented toward human rights enjoyment, it can serve as an important tool for the promotion of human rights around the world. To the extent that technological development results in limitations on rights in order to protect the rights of others or achieve important public policy objectives, such limitations should meet the tests of legality, necessity, and proportionality – the limits must be provided by law, be directed toward a legitimate purpose, and be narrowly tailored to achieve that purpose.Footnote 45

In the modern world, technology does much more than simply limit or protect rights. New technological developments are also putting pressure on the many fissures, ambiguities, and discontinuities that already exist within human rights law. Human rights law is generally focused on principles and not technologies, and there is no reason to expect that existing law will be unable to keep pace with technological change.Footnote 46 The introduction of technology does, however, illuminate situations in which existing human rights law is not sufficient, or sufficiently developed, to protect rights. For example:

  • Some areas of human rights law may rely implicitly on slippage in enforcement of domestic law to ensure that a right is adequately met. Perfect enforcement enabled by technologyFootnote 47 – such as the introduction of water meters that prevent households from taking more than their allotted amount of water – can expose the inadequacies of existing standards.

  • New technological developments may also create opportunities for violations that did not previously exist. Prior to digitization and the Internet, individuals might have generally relied on the fact that information disclosed about them in one context would be unlikely to find its way to another, or that in most cases, information, once disclosed, would eventually fade from public scrutiny. Today, however, information is perpetually available and infinitely sharable. Information about us that is disclosed in one context can now follow us foreverFootnote 48 or be combined with other data and used in ways we could not have foreseen.Footnote 49 What should the international human right to privacy mean in the digital world, and how can we reconcile an expansion of this right with the right to free expression?Footnote 50

  • Technology may also reveal ambiguities in our understanding of particular terms that had previously seemed natural and unproblematic. As Thérèse Murphy emphasized in the workshop organized around this volume, the meaning of the term “parent,” which previously seemed to have a fixed reference and definition, has changed in light of new reproductive technologies. Although “parent” has long had multiple dimensions, ambiguities in the term did not have practical consequences until advances in reproductive technology made possible new familial formations and roles.

  • Technological developments can also precipitate changes in the law itself. As Dickinson notes in Chapter 5, the combination of increasing automation and the use of contractors has reduced the likelihood of US casualties in foreign interventions to the point that new legal arguments can be made supporting an expanded executive role governing use of force in US law. At the same time, automation and the increased use of private contractors may be undermining the ability of international humanitarian law to provide a basis for international accountability.

Finally, human rights law must also grapple with the fact that technological innovations seem to be putting pressure on areas in which human rights law is weakest. One such area includes the positive state duty to fulfill rights. States are obligated under human rights law to respect, protect, and fulfill rights. What does this obligation to fulfill look like with respect to technological innovation? What does it mean when individuals must ensure their own digital security but lack access to appropriate expertise and affordable, easy-to-use tools for doing so? Technology is also challenging human rights law in the area of non-state actors. Should human rights law regulate the companies that create and build technologies and, if so, how? What obligations might human rights law impose on companies that not only themselves affect rights, but also serve as the gatekeepers for expressive activity that violates the rights of others?

As Laura Dickinson noted in the workshop, understanding the relationship between human rights law and technology may ultimately require a pluralistic approach. Technology constrains and influences behavior, of both individuals and states, in a variety of ways. Human rights law can no more control these effects than it can dictate the course of economic activity. The focus of a study on the intersection of human rights and technology must instead be to understand how technology interacts with human rights law to produce particular results – both the ways in which technology provides opportunities and risks for human rights enjoyment, and how the norms and practices of human rights advocacy are affected by new technological developments. The aim of this book is to begin that conversation.

Footnotes

1 See generally L. DeNardis, Protocol Politics: The Globalization of Internet Governance (Cambridge, MA: MIT Press, 2009); M. Flanagan, D. C. Howe, and H. Nissenbaum, “Embodying Values in Technology: Theory and Practice,” in Jeroen van den Hoven and John Weckert (eds.), Information Technology and Moral Philosophy (Cambridge: Cambridge University Press, 2008).

2 L. Lessig, Code and Other Laws of Cyberspace (New York: Basic Books, 1999), p. 6.

3 See A. E. Yamin, Power, Suffering and the Struggle for Dignity: Human Rights Frameworks for Health and Why They Matter (Philadelphia: University of Pennsylvania Press, 2016), p. 5.

4 HRBA Portal, “The Human Rights Based Approach to Development Cooperation: Towards a Common Understanding among UN Agencies,” http://hrbaportal.org/the-human-rights-based-approach-to-development-cooperation-towards-a-common-understanding-among-un-agencies.

5 See, e.g., A. Chayes and A. Handler Chayes, The New Sovereignty: Compliance with International Regulatory Agreements (Cambridge, MA: Harvard University Press, 1998); T. M. Franck, Fairness in International Law and Institutions (Oxford: Oxford University Press, 1995); R. Goodman and D. Jinks, “How to Influence States: Socialization and International Human Rights Law” (2004) 54 Duke Law Journal 621–703; A. Guzman, “A Compliance-Based Theory of International Law” (2002) 90 California Law Review 1823–87; H. H. Koh, “Why Do Nations Obey International Law?” (1997) 106 Yale Law Journal 2599–659.

6 See L. C. Porter, “Trying Something Old: The Impact of Shame Sanctioning on Drunk Driving and Alcohol-Related Traffic Safety” (2013) 38 Law & Social Inquiry 863–91; S. Gopalan, “Shame Sanctions and Excessive CEO Pay” (2007) 32 Delaware Journal of Corporate Law 757–97.

7 O. Hathaway, “Between Power and Principle: An Integrated Theory of International Law” (2005) 72 University of Chicago Law Review 469–521 at 490.

8 See, e.g., K. Sikkink, “Patterns of Dynamic Multi-Level Governance and the Insider-Outsider Coalition,” in D. della Porta and S. Tarrow (eds.), Transnational Protest and Global Activism (Oxford: Rowman & Littlefield Publishers, 2005), p. 156 (dynamic multilevel governance); Yamin, Power, Suffering, and the Struggle for Dignity, p. 64 (“rights constitute social practices that create spaces for vital social deliberation on how to arrange social institutions to meet population needs, especially of the most disadvantaged”).

9 W. E. Bijker et al. (eds.), The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, 2nd ed. (Cambridge, MA: MIT Press, 2012).

10 K. Sunder Rajan, Biocapital: The Constitution of Postgenomic Life (Durham, NC: Duke University Press, 2006); R. Rottenberg, “Social and Public Experiments and New Figurations of Science and Politics in Postcolonial Africa” (2009) 12 Postcolonial Studies 423–40.

11 L. Lessig, “The Law of the Horse: What Cyberlaw Might Teach” (1999) 113 Harvard Law Review 501–46 at 506.

12 See also M. Beutz Land, “Peer Producing Human Rights” (2009) 46(4) Alberta Law Review 1115–39 at 1116.

13 S. Cozzens and S. Thakur, “Problems and Concepts,” in S. Cozzens and S. Thakur (eds.), Innovation and Inequality: Emerging Technologies in an Unequal World (Cheltenham: Edward Elgar, 2014), p. 5.

14 Ibid., p. 8.

15 See A. Kapczynski, “The Cost of Price: Why and How to Get Beyond Intellectual Property Internalism” (2012) 59 UCLA Law Review 970–1026 at 978.

16 Communication technologies have long been critiqued as exacerbating rather than alleviating global power inequalities. See R. F. Jørgensen, Framing the Net: The Internet and Human Rights (Cheltenham: Edward Elgar, 2013), pp. 43–44 (discussing the call for a New World Information and Communication Order that would enable countries of the Global South to participate more fully in global communication networks).

17 S. D. Gatchair, I. Bortagaray, and L. A. Pace, “Strong Champions, Strong Regulations: The Unexpected Boundaries of Genetically Modified Corn,” in Cozzens and Thakur, Innovation and Inequality: Emerging Technologies in an Unequal World, p. 116 (noting that Argentina has profited more from genetically modified crops because it has more large commercial farms than other developing countries, and thus can benefit more from the improvements that these products enable).

18 See, e.g., S. E. Kreps, “Social Networks and Technology in the Prevention of Crimes against Humanity,” in R. I. Rotberg (ed.), Mass Atrocity Crimes: Preventing Future Outrages (Washington, DC: World Peace Foundation, 2010), p. 175; C. Tuckwood, “The State of the Field: Technology for Atrocity Response” (2014) 8 Genocide Studies and Prevention: An International Journal 81–86 at 81; C. Hargreaves and S. Hattotuwa, ICTs for the Prevention of Mass Atrocity Crimes (ICT for Peace Foundation, October 2010), http://ict4peace.org/wp-content/uploads/2010/11/ICTs-for-the-Prevention-of-Mass-Atrocity-Crimes1.pdf.

19 S. Livingston and G. Walter-Drop, “Conclusions,” in S. Livingston and G. Walter-Drop (eds.), Bits and Atoms: Information and Communication Technology in Areas of Limited Statehood (Oxford: Oxford University Press 2014), p. 169.

20 Amnesty International, “Launch of Innovative Digital Tool to Help Expose Patterns of Israeli Violations in Gaza,” July 8, 2015, www.amnesty.org/en/latest/news/2015/07/launch-of-innovative-digital-tool-gaza/.

21 M. K. Land et al., #ICT4HR: Information and Communication Technologies for Human Rights (Paris: World Bank, 2012), pp. 8–9.

22 See, e.g., American Association for the Advancement of Science, “High-Resolution Satellite Imagery and Housing Destruction in Ulu, Sudan,” www.aaas.org/page/high-resolution-satellite-imagery-and-housing-destruction-ulu-sudan.

23 See generally Z. Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest (New Haven, CT: Yale University Press, 2017).

24 See Tuckwood, “The State of the Field,” p. 82 (“Very few observers still believe that simply introducing an unspecified category of tools labeled ‘technology’ will be the panacea to defend human rights and save lives.”); see also Kreps, “Social Networks and Technology,” p. 175.

25 L. Diamond, "Liberation Technology," in L. Diamond and M. F. Plattner (eds.), Liberation Technology: Social Media and the Struggle for Democracy (Baltimore: Johns Hopkins University Press, 2012), pp. 10–12.

26 K. Roth, “Defending Economic, Social and Cultural Rights: Practical Issues Faced by an International Human Rights Organization” (2004) 26 Human Rights Quarterly 63–73 at 67–68.

27 Lessig, Code and Other Laws, pp. 96, 238.

28 On the topic of discrimination and algorithms, see generally Executive Office of the President, Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights (May 2016), https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf; S. Barocas and A. Selbst, "Big Data's Disparate Impact" (2016) 104 California Law Review 671–732; N. Diakopoulos, Algorithmic Accountability: On the Investigation of Black Boxes (Tow Center for Digital Journalism, December 3, 2014), http://towcenter.org/research/algorithmic-accountability-on-the-investigation-of-black-boxes-2/; Algorithmic Fairness, http://fairness.haverford.edu/; Centre for Internet and Human Rights, Ethics of Algorithms, https://cihr.eu/ethics-of-algorithms/.

29 See, e.g., Goodman and Jinks, “How to Influence States,” p. 626.

30 E. Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (New York: Public Affairs, 2013), p. 6.

31 G. A. Sarfaty, Values in Translation: Human Rights and the Culture of the World Bank (Stanford, CA: Stanford University Press, 2012), p. 134. Even the very process of “translating” human rights in ways that have local resonance raises this tension. S. Engle Merry, Human Rights & Gender Violence: Translating International Law into Local Justice (Chicago and London: University of Chicago Press, 2006), p. 5 (“Rights need to be presented in local cultural terms in order to be persuasive, but they must challenge existing relations of power in order to be effective.”).

32 See generally M. Mazzucato, The Entrepreneurial State: Debunking Public vs. Private Sector Myths (London: Anthem Press, 2014).

33 E. B. Laidlaw, Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (Cambridge: Cambridge University Press, 2015), pp. 46–56.

34 UN Office of the High Commissioner for Human Rights, Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework (New York and Geneva: United Nations, 2011), p. 13.

35 Ibid., pp. 3, 27.

36 See, e.g., A. Clapham, The Human Rights Obligations of Non-State Actors (Oxford: Oxford University Press, 2006).

37 M. Nunez, “Former Facebook Workers: We Routinely Suppressed Conservative News,” Gizmodo, May 9, 2016, http://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006.

38 J. Concha, “Graphic Videos Spark Questions for Facebook, Journalism,” The Hill, July 10, 2016, http://thehill.com/homenews/287166-graphic-videos-spark-questions-for-facebook-journalism; “Facebook Decides Which Killings We’re Allowed to See,” Slashdot, July 7, 2016, https://tech.slashdot.org/story/16/07/07/1652224/facebook-decides-which-killings-were-allowed-to-see.

39 Emphasizing the importance of encryption and anonymity for freedom of expression, the Special Rapporteur on Freedom of Expression has said that "States should promote strong encryption and anonymity." Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, David Kaye, U.N. Doc. A/HRC/29/32 (May 22, 2015), ¶ 59.

40 See, e.g., K. De Feyter and F. Gómez Isa (eds.), Privatisation and Human Rights in the Age of Globalisation (Antwerp, Oxford: Intersentia, 2005).

41 M. Land, "Regulating Private Harms Online," in R. F. Jørgensen (ed.), Private Actors and Human Rights Online (Cambridge, MA: MIT Press, forthcoming).

42 European Commission (Press Release), "European Commission and IT Companies Announce Code of Conduct on Illegal Online Hate Speech" (May 31, 2016), http://europa.eu/rapid/press-release_IP-16-1937_en.htm.

43 See, e.g., N. Lomas, “Germany’s Social Media Hate Speech Law Is Now in Effect,” TechCrunch, Oct. 2, 2017, https://techcrunch.com/2017/10/02/germanys-social-media-hate-speech-law-is-now-in-effect/; J. Kastrenakes, “EU Says It’ll Pass Online Hate Speech Laws if Facebook, Google, and Others Don’t Crack Down,” The Verge, Sept. 28, 2017, www.theverge.com/2017/9/28/16380526/eu-hate-speech-laws-google-facebook-twitter.

44 Land, “Regulating Private Harms Online.”

45 Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Frank La Rue, U.N. Doc. A/HRC/17/27 (May 16, 2011) ¶ 24.

46 M. K. Land, "Toward an International Law of the Internet" (2013) 54 Harvard International Law Journal 393–458 at 408.

47 Lessig, Code and Other Laws, p. 6; J. Grimmelmann, Note, “Regulation by Software” (2005) 114 Yale Law Journal 1719–58 at 1723–24.

48 Google Spain SL v. AEPD, Case C-131/12, 2014 EUR-Lex 62012CJ0131 (May 13, 2014), ¶ 92 (interpreting the European Union Data Protection Directive to require search engines to delist search results if they are "inaccurate, irrelevant or excessive").

49 D. G. Johnson, P. M. Regan, and K. Wayland, “Campaign Disclosure, Privacy and Transparency” (2011) 19 William & Mary Bill of Rights Journal 959–82 at 969 (describing bouncing, shading, and highlighting).

50 As Sinha notes in this volume, US law takes a fairly bright-line approach to this question, ostensibly removing protection from any information that has been disclosed to a third party. On the reconciliation of privacy and free expression, see, e.g., A. Chander and U. P. Lê, "Free Speech" (2015) 100 Iowa Law Review 501–49 at 539–42.
