
Part III - Beyond Public/Private

States, Companies, and Citizens

Published online by Cambridge University Press:  19 April 2018

Molly K. Land, University of Connecticut School of Law
Jay D. Aronson, Carnegie Mellon University, Pennsylvania

Publisher: Cambridge University Press
Print publication year: 2018
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/cclicenses/).

This final part considers the role of the actors – states, companies, and citizens – whose conduct has a bearing on the promotion and protection of rights. It breaks with the traditional binary frame of “public/private” to consider the impact that individual conduct has on the enjoyment of rights, as well as the obligations of states to support the capacity of individuals to protect their own rights. By considering not only states and non-state actors but also individuals, this part creates a more complete foundation for developing effective responses to violations of rights such as privacy and freedom of expression.

In Chapter 10, “Digital Communications and the Evolving Right to Privacy,” Lisl Brunner focuses first on the obligations of states. She considers the evolution of the right to privacy under international law. Brunner notes several areas where this law needs further development, including with respect to the flow of data across borders, as well as the ways in which current efforts to protect personal data are – and are not – responding to those gaps. In Chapter 11, Rikke Jørgensen then examines the responsibilities of non-state actors in respecting and protecting rights and the role of states in regulating them. She notes that core civil and political rights are being exercised in a commercial domain that is owned and operated by private actors, and that user expressions and personal information are the raw material that drives the business models of Internet and social media companies. Jørgensen notes the existence of a governance gap as applied to these companies, which are subject primarily to moral, not legal, obligations to respect rights. This gap is particularly troubling because current approaches to corporate social responsibility focus on how these companies respond to pressure from the state to violate rights but neglect the extent to which the enforcement of their own terms of service negatively affects privacy and freedom of expression.

Finally, in Chapter 12, G. Alex Sinha addresses the role of individuals in ensuring their own safety and security online. Sinha makes the case for greater attention to what human rights law has to say about waiver of privacy rights, since many of the actions we take in our everyday online communications can undermine our privacy. Drawing a bright line between public and private based on whether information has been shared with, or is visible to, another is increasingly out of sync with modern patterns of communication. The ease with which digitized information can be obtained, shared, and collated today exponentially increases the privacy impacts of even ostensibly “public” information.Footnote 1 Further, given the role of private individual conduct in the protection of rights such as privacy, states may have greater obligations than ever before to equip individuals to protect their own rights. Sinha’s research persuasively illustrates the challenges of protecting privacy online and the way in which these challenges may force us to choose between protecting our privacy and participating in democratic culture.

The contributions in this part provide two essential insights into the impact of states, corporations, and individuals in regulating the effects of technology on human rights. First, they illustrate many of the ways in which human rights law, particularly with its binary emphasis on states and non-state actors, needs to adapt to the technological changes that have taken place over the past few years. The chapters by Brunner (Chapter 10) and Sinha (Chapter 12), in particular, identify ways in which current approaches to privacy need further development in order to respond to issues created by new technologies. Data protection law, while important, may only protect limited aspects of what a right to privacy entails, and any human right to privacy must also answer the question of when these rights are waived.

Second, viewing the relationships among states, companies, and citizens reveals significant governance gaps in responding to the impacts of new technologies. An essential element of state duties is the duty to protect individuals from interference in enjoying their fundamental human rights by actors such as corporations. In the context of the Internet, states have not paid sufficient attention to this obligation. As Brunner (Chapter 10) and Jørgensen (Chapter 11) both discuss, states, particularly in Europe, have taken action to protect the right to privacy from infringement by companies. But, as Jørgensen observes, they have not acted as effectively to protect freedom of expression, particularly when private companies enforce their terms of service in ways that are detrimental to human rights.

There are, of course, many obstacles to effective state enforcement of the rights to privacy and to freedom of expression, including the state’s own desire to limit criticism levied against it or its core ideology. In such circumstances, one of the most effective responses we can advocate for is to require all states to ensure that their citizens can protect themselves; for example, by allowing access to the encryption and anonymity tools Sinha describes in Chapter 12. Such tools, along with the remedies that Jørgensen notes, are required by international law and may be a way to increase respect for freedom of expression and privacy even in the face of intransigent states and companies.

10 Digital Communications and the Evolving Right to Privacy

Lisl Brunner
I IntroductionFootnote 1

The meaning of the human right to privacy is evolving in response to developments in communications technology and an increasingly connected world in which data transits national boundaries imperceptibly. Although governments have had the capacity to access and store unprecedented quantities of digital communications data for some time, high-profile terrorist attacks and expanding transnational criminal activity have provided a strong motive to continue and expand these activities. When Edward Snowden revealed the global scope of existing communications surveillance capacity, states and civil society organizations turned to international law to seek clarity on how the right to privacy protects individuals, preserves legitimate state interests, and addresses the realities of the large-scale collection of data across traditional borders.

The tribunals and experts who interpret international human rights law have developed a rich body of standards on the right to privacy in communications, with European institutions leading the way. These standards address much of the present-day collection and use of digital communications, but significant gaps still exist. Until recently, there were few clear norms regarding the bulk collection of communications data, the responsibility of private companies to respect privacy rights, and the rules and protections that apply when communications data crosses borders.

This chapter explores the evolution of the right to privacy as it is established in international human rights law, and the ways in which human rights law is beginning to bridge these gaps. The first part provides an overview of the right to privacy and highlights developments in the digital age that international human rights law must urgently address. The second part outlines the scope and meaning of the right to privacy in communications as it appears in international human rights treaties and in interpretations of these treaties by international tribunals and experts. The chapter then examines how European institutions are interpreting data protection law in a way that seeks to bridge some of the gaps in privacy protection that have formed in international human rights law. The chapter concludes by describing the incipient steps that UN and European institutions are taking to address the privacy challenges presented by the seamless flow of data across borders.

II The Evolution of the Right to Privacy and Its Present Challenges
A The Protection of Privacy in Human Rights Law

The right to privacy has a broad scope. Scholars note that there is no universal conceptualization of privacy and that societies’ notions of its scope have evolved in response to changing political contexts and technological landscapes.Footnote 2 Privacy has often been linked to the interests of limiting access to the self and exercising control over one’s personal information and actions.Footnote 3 In its diverse characterizations, privacy has been closely linked to human dignity.

The right to privacy is protected in the International Covenant on Civil and Political Rights (ICCPR), which had 168 state parties as of November 2016. Article 17 provides the following:

  1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks upon his honour and reputation.

  2. Everyone has the right to the protection of the law against such interference or attacks.Footnote 4

Article 12 of the Universal Declaration of Human Rights contains a nearly identical formulation,Footnote 5 and the right is also protected in the European Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention),Footnote 6 the Charter of Fundamental Rights of the European Union,Footnote 7 the American Convention on Human Rights,Footnote 8 and the Arab Charter on Human Rights.Footnote 9

International and domestic tribunals have interpreted the right to privacy as protecting an individual’s capacity to decide with whom she has intimate relationships,Footnote 10 when to have a family and who forms part of it,Footnote 11 and even when to end her own life.Footnote 12 Privacy in one’s correspondence serves to limit the government’s power to monitor its subjects, and it protects a sphere in which individuals can develop and express ideas, exchange confidences, and build relationships. When surveillance of communications occurs or is perceived to occur, individuals are inhibited from seeking and disseminating ideas, and self-censorship results.Footnote 13 In light of its relation to all of these interests and other human rights, the right to privacy has been called “an essential condition for the free development of the personality.”Footnote 14

In human rights law, the state’s duty to respect and ensure rights entails negative and positive obligations. The state fulfills its negative obligation by not interfering with an individual’s right unless it acts in accordance with the law, in pursuit of a legitimate interest, and in a manner that is necessary and proportionate to the fulfillment of that interest.Footnote 15 The positive obligation encompasses “the duty of the States Parties to organize the governmental apparatus and, in general, all the structures through which public power is exercised, so that they are capable of juridically ensuring the free and full enjoyment of human rights.”Footnote 16 With respect to the right to privacy, the UN Human Rights CommitteeFootnote 17 has affirmed that states must establish privacy protections in law as part of their duty to ensure rights.Footnote 18

European institutions have led the way in interpreting the scope of the right to privacy in communications, and particularly in balancing it with the state’s interests in gathering information for law enforcement and national security purposes. In 1978, the European Court of Human Rights established that “[p]owers of secret surveillance of citizens, characterising as they do the police state, are tolerable under the Convention only in so far as strictly necessary for safeguarding the democratic institutions.”Footnote 19 European leadership in this area stems from the region’s experience during the Second World War, when census records facilitated the identification of the Jewish population and other groups targeted for persecution and extermination by Nazi and Nazi-influenced regimes.Footnote 20 Germany’s particularly staunch defense of the right to privacy is also linked to the widespread use of surveillance by the Stasi secret police in East Germany and the elaborate files in which it detailed individuals’ private lives.Footnote 21

The European approach initially contrasted with the more stringent approach of the UN Human Rights Committee, whose 1988 General Comment on the right to privacy indicated that “[s]urveillance, whether electronic or otherwise, interceptions of telephonic, telegraphic and other forms of communication, wire-tapping and recording of conversations should be prohibited.”Footnote 22 This pronouncement appears strikingly categorical and out of step with state practice. It has historically been regarded as a legitimate state interest to gather foreign intelligence in order to prevent, detect, and prosecute crime and threats to national security.Footnote 23

Over time, however, a more uniform set of global standards on the right to privacy in digital communications has formed, and other human rights institutions have looked to the European Court’s extensive case law to inform interpretations of this right. Until the beginning of this century, interpretations of the right to privacy in communications by the European Court and UN mechanisms generally focused on articulating guidelines for conducting targeted surveillance. But advances in technology, coupled with rising national security concerns, have facilitated and incentivized the amassing of large quantities of data by governments. Revelations by Edward Snowden and others have demonstrated the areas in which Western states fall short of meeting existing human rights standards, as well as the areas in which these standards are poorly developed or absent.

B The Impact of the Snowden Revelations on Privacy in the Digital Age

Beginning in June 2013, the Snowden disclosures gave the public a wealth of detail about the scope and nature of government surveillance of communications in the digital age, primarily focusing on intelligence programs in the United States and the United Kingdom. The documents describe how the US government collected call detail records of millions of individuals from telecommunications companies on an ongoing basis, performed queries on the records in order to identify potential suspects of terrorism and other international crimes, and used “contact-chaining” to review the records of individuals within three levels of communication of the initial suspect to identify other potential suspects.Footnote 24 Through the PRISM program, the US government compelled electronic communications service providers to provide the contents of online communications in response to requests that identified specific attributes of interest (i.e., “selectors”). Through the “upstream” method of surveillance, authorities gained access to the contents of telephone and Internet communications from the cables that transmit the communications internationally.Footnote 25
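At bottom, contact-chaining is a breadth-first walk of a call graph out to a fixed number of hops. The sketch below is a minimal, hypothetical illustration of the idea, assuming call detail records reduced to (caller, callee) pairs; the actual programs described in the documents operated on far richer metadata and at vastly larger scale, and none of the names or data here are drawn from any real system.

```python
from collections import deque

def contact_chain(call_records, seed, max_hops=3):
    """Return every identifier within max_hops of seed in the contact graph.

    A toy model of "contact-chaining": call detail records are treated as
    undirected edges, and we collect everyone reachable within max_hops.
    """
    # Build an adjacency map: a call links both parties.
    neighbors = {}
    for caller, callee in call_records:
        neighbors.setdefault(caller, set()).add(callee)
        neighbors.setdefault(callee, set()).add(caller)

    reached = {seed: 0}          # identifier -> hop distance from the seed
    queue = deque([seed])
    while queue:
        current = queue.popleft()
        if reached[current] == max_hops:
            continue             # do not expand past the hop limit
        for contact in neighbors.get(current, ()):
            if contact not in reached:
                reached[contact] = reached[current] + 1
                queue.append(contact)
    return reached

# Toy data: D is reached at hop 3, while E (four hops out) is excluded.
records = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("B", "F")]
print(contact_chain(records, "A"))   # hop distances: A=0, B=1, C=2, F=2, D=3
```

Even this toy graph suggests why a three-hop rule sweeps so broadly: with an average of k contacts per person, three hops can reach on the order of k³ individuals from a single seed.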

The Snowden documents suggested that the United Kingdom had obtained the contents of communications in bulk by tapping undersea cablesFootnote 26 and had intercepted and stored webcam images (including a large number of nude images) from nearly two million user accounts globally.Footnote 27 Agencies of both governments purportedly defeated encryption standards to access secure communications,Footnote 28 intercepted the communications of diplomatic missions and world leaders, including Angela Merkel and Dilma Rousseff,Footnote 29 and used listening stations in their foreign embassies to intercept communications traffic abroad.Footnote 30

Although the Snowden revelations largely focused on the United States, the United Kingdom, and their English-speaking partners in Canada, Australia, and New Zealand (the Five Eyes Alliance), information has also been published suggesting that large-scale surveillance programs exist in France,Footnote 31 Sweden,Footnote 32 Russia,Footnote 33 China,Footnote 34 Ethiopia,Footnote 35 and Colombia,Footnote 36 among other countries. Researchers and WikiLeaks have alleged that government authorities in the Middle East, Africa, and Latin America have obtained spyware that allows them to hack into communications devices remotely in order to monitor individuals.Footnote 37

The Snowden revelations had a more direct impact on international law than prior reports because they also signaled that US and UK surveillance programs targeted powerful allies. Germany, Brazil, and other states brought their grievances to the United Nations, and in December 2013, the General Assembly called on states “[t]o review their procedures, practices and legislation regarding the surveillance of communications, their interception and the collection of personal data, including mass surveillance, interception and collection, with a view to upholding the right to privacy by ensuring the full and effective implementation of all their obligations under international human rights law.”Footnote 38 The General Assembly requested the Office of the High Commissioner for Human Rights (OHCHR) to prepare a report on the right to privacy in the digital age, and the following year it encouraged the Human Rights Council to create a special mandate dedicated to the subject.Footnote 39 Joseph Cannataci was appointed as the first Special Rapporteur on the right to privacy in 2015, with a mandate to gather information and raise awareness regarding challenges facing the right to privacy, both generally and in the digital age.Footnote 40 Civil society organizations have also advocated for limitations on state surveillance at the international level, developing the Necessary and Proportionate Principles, which are based on the international human rights legal standards described below.Footnote 41

The US government responded to the Snowden revelations by terminating its bulk collection of telephony metadata under one legal authority and committing to greater transparency regarding its communications surveillance programs.Footnote 42 Seven months after the revelations, its signals intelligence policy was updated to establish principles circumscribing the collection and use of signals intelligence.Footnote 43 The policy directive recognized the “legitimate privacy interests” of all persons, and it required that intelligence gathering “include appropriate safeguards for the personal information of all individuals” regardless of their nationality or location. These steps represent progress, but debate about the proportionality of surveillance programs operated by US authorities continues.

On the opposite side of the Atlantic, the United Kingdom, France, and Switzerland have recently passed new laws expanding their surveillance powers.Footnote 44 The UK Investigatory Powers Act establishes broad powers for the government to engage in bulk collection of communications data, obtain data located overseas from companies with a UK presence, require the decryption of communications, and perform “bulk equipment interference.”Footnote 45 Some experts have praised the clarity of the bill and its oversight provisions; privacy experts and advocates have been highly critical of its sweeping powers.Footnote 46

The next section discusses the well-developed body of international human rights law that applies to the surveillance programs revealed by Edward Snowden. While these standards are not well defined in a few areas, such as bulk collection of data, the tribunals and experts that interpret them are moving to fill these gaps.

III Human Rights Law and Privacy in Digital Communications

The language of human rights treaties is general, and it falls to international tribunals, human rights mandate holders, expert bodies, and national courts to interpret the scope and meaning of a right. The European Court of Human Rights defines the obligations of the forty-seven contracting parties of the European Convention on Human Rights. Interpretations of the ICCPR, in turn, are generated by UN bodies including the International Court of Justice (ICJ), the Human Rights Committee, special mandate holders, and the Office of the High Commissioner for Human Rights (but only the decisions of the ICJ are legally binding on parties). The Court of Justice of the European Union has also begun to interpret the rights to privacy and data protection as contained in the EU Charter of Fundamental Rights. The Inter-American Commission and Inter-American Court of Human Rights interpret the American Convention on Human Rights. Consistent with the principle that human rights are universal, these entities draw on one another’s interpretations of rights and have thereby begun generating a fairly uniform body of international law on the right to privacy.

A Legality, Necessity, and Proportionality

Human rights law is implicated when a state interferes with the right to privacy, which occurs when the contents of communications or communications data are collected by state authorities, regardless of whether the data is examined.Footnote 47 Once authorities examine data that has been collected, a second interference takes place. Retaining data over time interferes with the right to privacy,Footnote 48 as does sharing communications data with other parties.Footnote 49 Restricting anonymity in digital communications is also considered to be an interference with the right to privacy, because anonymous and secure communications allow the free exchange of information and ideas, and anonymity “may be the only way in which many can explore basic aspects of identity, such as one’s gender, religion, ethnicity, national origin or sexuality.”Footnote 50

In order to be consistent with international human rights law, an interference with a qualified right such as privacy must meet the tests of legality, necessity, and proportionality.Footnote 51 In terms of legality, the action constituting the interference (such as interception of communications) must be previously established in a law that is publicly accessible, clear, and precise, meaning that its consequences are foreseeable.Footnote 52 An interference must be in pursuit of a legitimate aim, and it must be a necessary and proportionate means of achieving that aim. For the European Court of Human Rights, the measure must be “necessary in a democratic society,” meaning that it must answer a “pressing social need,” and state authorities must provide “relevant and sufficient” justifications for the measure.Footnote 53

The court has established that states have a margin of appreciation in determining whether a measure is necessary and proportionate, particularly when the protection of national security is concerned.Footnote 54 When a state engages in secret surveillance, the analysis focuses on whether the measures are “strictly necessary for safeguarding the democratic institutions” and whether “adequate and effective guarantees against abuse” are in place.Footnote 55 Because individual applicants can rarely prove that they have been the subject of such surveillance, the European Court has permitted challenges to intelligence laws in abstracto in certain circumstances, at times finding a violation of Article 8 where the legal framework did not meet the legality test,Footnote 56 and at other times looking at whether the law itself is necessary and proportionate.Footnote 57

For the European Court, laws containing a great degree of specificity are more likely to be deemed consistent with the European Convention. The law should specify the nature of the offenses for which surveillance can be ordered,Footnote 58 which individuals’ communications can be monitored,Footnote 59 and which authorities are empowered to request, order, and carry out surveillance, as well as the procedure to be followed.Footnote 60 It should provide for “a limit on the duration of telephone tapping; the procedure to be followed for examining, using and storing the data obtained; the precautions to be taken when communicating the data to other parties; and the circumstances in which recordings may or must be erased or destroyed.”Footnote 61 Laws that restrict the right to privacy “must not render the essence of the right meaningless and must be consistent with other human rights, including the prohibition of discrimination.”Footnote 62

The European Court of Human Rights has determined on two occasions that the German G-10 Act of 1968 satisfied the rigorous standards for legality that a communications surveillance law must meet.Footnote 63 It has also approved provisions of the UK Regulation of Investigatory Powers Act on the interception of domestic communications.Footnote 64 In contrast, the court has found that other laws in the United Kingdom, as well as in Russia, Switzerland, Bulgaria, Romania, and Hungary, lacked the necessary specificity and gave the authorities overly broad discretion to conduct communications surveillance.Footnote 65

B The Necessity and Proportionality of Bulk Collection

For years, human rights bodies have emphasized that although advances in communications technology require evolution in legal safeguards, the tests of legality, necessity, and proportionality continue to apply.Footnote 66 Yet many have questioned whether programs that collect or retain data from millions of individuals who are not implicated in criminal activity or terrorism can ever be necessary and proportionate means of protecting the state and its people. For several UN Special Rapporteurs, the answer is no.Footnote 67 The OHCHR, the European Court of Human Rights, and the Court of Justice of the European Union have taken a more measured approach. While they have condemned indiscriminate or generalized surveillance measures, they have indicated that the principles that apply to targeted interception of communications and large-scale collection are generally the same.Footnote 68

When analyzing bulk surveillance programs, the European Court employs a higher level of scrutiny, and it has found that programs that are clearly circumscribed by law and accompanied by robust oversight mechanisms can be consistent with the right to privacy.Footnote 69 In Weber and Saravia v. Germany, the court deemed “strategic monitoring” of communications to be consistent with the European Convention, because the law provided sufficient guarantees against abuses of state power.Footnote 70 The law permitted interception based on “catchwords” designed to identify communications linked to one or more of six specific crimes. The guarantees included clear rules governing every aspect of data collection and use, as well as oversight by the three branches of government and a civilian agency.Footnote 71

In contrast, bulk surveillance programs that do not clearly circumscribe state power in law and in practice have been deemed inconsistent with Article 8 of the Convention. The court has ruled that the indefinite retention of biometric data of persons who were suspected (but not convicted) of committing criminal offenses was not necessary in a democratic society.Footnote 72 In Liberty v. United Kingdom, the bulk interception of external communications pursuant to a 1985 law was deemed to violate Article 8 because it gave the executive unfettered discretion as to which of the intercepted communications could be examined.Footnote 73 In the 2015 case Zakharov v. Russia, the court found the government’s system of direct access to communications networks by state authorities (known as “SORM”) inconsistent with the European Convention. The court noted that interception could take place for a broad range of offenses (including pickpocketing), and that judges had limited powers to order and oversee interception.Footnote 74 Because interception orders were not presented to communications service providers, the court questioned whether judicial control existed in practice.Footnote 75

Most recently, in Szabo and Vissy v. Hungary, the court determined that broadly drafted laws and weak oversight of surveillance (primarily by political officials of the same agency that conducted the surveillance) rendered bulk interception of communications inconsistent with the Convention. Deeming “strategic, large-scale interception” for national security purposes to be “a matter of serious concern,” the court stated: “A measure of secret surveillance can be found as being in compliance with the Convention only if it is strictly necessary, as a general consideration, for the safeguarding [of] the democratic institutions and, moreover, if it is strictly necessary, as a particular consideration, for the obtaining of vital intelligence in an individual operation.”Footnote 76 “An individual operation” might be one with a specific targetFootnote 77; it might also be an effort to locate and apprehend a terrorist by collecting all communications in a certain area during a particular period. Both Weber and Saravia and the recent Tele2 Sverige judgment of the Court of Justice of the European Union support the latter position. The court will have more opportunities to determine whether bulk collection should be further circumscribed, as at least three cases challenging bulk surveillance programs in the United Kingdom are pending before it.Footnote 78

For their part, several UN human rights experts have concluded that the bulk surveillance of communications is inherently incompatible with the protection of Article 17 of the ICCPR. The former UN Special Rapporteur for counterterrorism and human rights, Martin Scheinin, has indicated that intelligence-gathering programs should be “case-specific interferences [with the right to privacy], on the basis of a warrant issued by a judge on showing of probable cause or reasonable grounds.”Footnote 79 The current Special Rapporteur, Ben Emmerson, and the Special Rapporteur on the right to privacy, Joseph Cannataci, have made similar determinations.Footnote 80 While Scheinin and others have emphasized the need for strong oversight mechanisms and strict regulations on the use of collected data, they maintain that these safeguards alone are insufficient to make bulk surveillance consistent with the right to privacy.Footnote 81

It seems unlikely that the European Court will shift to the UN rapporteurs’ more categorical condemnation of bulk collection, especially as the Court of Justice of the European Union has recently reaffirmed the standards of its case law to date. The European Court’s position is logical: Communications surveillance is not prohibited by international law, and it is practiced by prominent European states. As a policy matter, however, it is problematic that human rights law should legitimize a practice that few states will conduct in a rights-respecting manner, and which leads to ever-increasing amounts of data being accessible to actors with a variety of motivations.

C Effective Oversight of Communications Surveillance

International human rights law generally provides that large-scale surveillance can be consistent with the right to privacy if it is accompanied by robust oversight mechanisms. Yet oversight of intelligence services and their covert operations has always proved challenging, even in societies where the rule of law is well established. Legislative committees conduct oversight of the intelligence services in the United States and the United Kingdom, but the Snowden revelations raised doubts as to whether these committees have access to the information necessary to perform their roles effectively.Footnote 82 In the United States, oversight of signals intelligence activities conducted by executive order is limited.Footnote 83 Additionally, while the US Foreign Intelligence Surveillance Court provides judicial authorization and oversight of several intelligence-gathering programs, for many years the confidential nature of its opinions obscured its surprisingly broad interpretation of a provision that permitted the collection of information “relevant to an authorized investigation.”Footnote 84 That court’s authority to examine the collection of foreign intelligence under the PRISM and upstream programs revealed by Snowden is also limited to assessing the government’s targeting and minimization procedures.Footnote 85

UN bodies and the European Court have recognized that ex ante authorization of communications surveillance by the judiciary provides a powerful safeguard against abuse,Footnote 86 but they have declined to deem it a requirement of adequate surveillance laws, given the often limited powers of the judiciary to access relevant information or to assess the necessity and proportionality of surveillance.Footnote 87 Instead, they recommend that oversight be performed by all branches of government, including executive inspectors general or supervisory bodies, as well as civilian agencies.Footnote 88 For these authorities, oversight mechanisms must have sufficient resources and access to pertinent information in order to serve as an effective check on the power of law enforcement or security agencies.Footnote 89 There must also be a measure of public scrutiny; for example, anyone should be able to bring a claim before an oversight body, and its periodic reports and decisions about individual complaints should be publicly accessible.Footnote 90

As the European Court recognized in Zakharov, communications service providers also have the potential to be a check on intelligence services and law enforcement agencies.Footnote 91 Communications service providers execute judicial orders for surveillance and can challenge those that are overly broad or illegal.Footnote 92 They can also increase transparency about how surveillance is conducted by disclosing the numbers of requests for interception and communications data that they receive.Footnote 93 Whistleblowers offer another potential check on the power of public authorities to conduct surveillance, and experts have emphasized the need for protections for those who act in good faith when disclosing information “to the media or the public at large if they are made as a last resort and pertain to matters of significant public concern.”Footnote 94

D Access to Effective Remedy

Closely linked to oversight is the requirement that states ensure access to an effective remedy for anyone who claims that her rights have been violated.Footnote 95 The remedy may be through a judicial or nonjudicial mechanism that has the capacity to bring about the investigation, prosecution, and sanction of those responsible for violations (if applicable) and to provide an adequate remedy for the victim.Footnote 96 Any mechanism should be independent and have access to the evidence necessary to determine claims before it.Footnote 97

The secret nature of communications surveillance can render access to justice more tenuous for those who claim a violation of their right to privacy. As a result, human rights tribunals and experts are increasingly recommending that authorities provide notice to targets of surveillance once the surveillance has ceased.Footnote 98 States, however, have generally resisted this practice as impractical or detrimental to surveillance operations and methods. If a state does not provide notice, it should have liberal rules on standing to bring claims that challenge covert surveillance regimes.Footnote 99 If an individual’s right to privacy is found to have been violated, adequate remedies may include a declaratory judgment, damages, and injunctive relief against the orders that permit data to be intercepted or retained. Publication of decisions determining the rights of complainants also contributes to transparency and constitutes part of such a remedy.Footnote 100

Although significant gaps between law and practice remain, a fairly comprehensive set of rules has emerged in the jurisprudence of the European Court of Human Rights. Surveillance programs are more likely to be consistent with international human rights law when they are strictly regulated by law, overseen by a number of independent and properly resourced bodies, capable of being challenged, and marked by the greatest degree of transparency possible. At the same time, human rights law itself has fallen short in two respects. First, its rules apply to states, rather than to the private actors who hold this personal data, and second, it has only recently begun to address the privacy protections that should apply to communications when they transit borders. The next section examines how European institutions seek to fill the first gap by interpreting EU data protection norms in light of the rights to privacy and data protection. The following section describes how both UN and European interpretations of the right to privacy are evolving to address the flow of digital communications across national borders.

IV Data Protection and the Right to Privacy

While human rights law sets out the obligations of states that are parties to human rights treaties, data protection laws and principles regulate practices of both state and private actors that can affect the right to privacy. The protection of personal information has historically been regarded as a component of the right to privacy,Footnote 101 yet when the Charter of Fundamental Rights of the European Union acquired binding legal force in 2009, data protection became a distinct fundamental right in Europe.Footnote 102 UN Special Rapporteur Martin Scheinin has opined that a right to data protection is emerging at a global level as well.Footnote 103 While it is not recognized as such in human rights treaties outside of Europe, interpretations of data protection law that are closely tied to international human rights standards may convert this body of law into an effective tool for protecting rights at the domestic and international levels.

In terms of international law and guidelines, data protection principles are contained in the Council of Europe’s Data Protection Convention,Footnote 104 the OECD Privacy Framework,Footnote 105 and the Asia Pacific Economic Cooperation Privacy Framework.Footnote 106 They are reflected in the newly adopted EU General Data Protection Regulation, which applies in the 28 EU member states, and in the proposed EU Regulation on Privacy and Electronic Communications.Footnote 107 They include the principles that the collection and use of personal data – including communications data – should be in accordance with the law, subject to limitations, and strictly for the fulfillment of purposes that are clearly articulated to the data subject. Data should be deleted when it is no longer necessary for the purposes that justified collection. The entity collecting personal data should only disclose that data to other parties by the authority of the law or if the data subject has consented. Individuals should have notice about, and a measure of control over, the ways in which their data is collected, used, and shared, as well as ways to hold states and private actors accountable for violations.Footnote 108 These principles echo the international human rights standards laid out in the previous section, and they form the basis of strong domestic data protection laws in states such as Canada, Argentina, Israel, and Japan.Footnote 109
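To see how these principles cash out in practice, the following sketch models a few of them – purpose limitation, storage limitation, and consent-based disclosure – as simple checks over a hypothetical record type. It is an illustration of the logic only, not an implementation of any particular statute or regulation; every name in it is invented.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PersonalDataRecord:
    subject_id: str
    data: dict
    purpose: str                 # purpose articulated to the data subject at collection
    collected_at: datetime
    retention: timedelta         # how long that purpose justifies keeping the data
    consented_recipients: frozenset = frozenset()  # parties the subject agreed to

def may_use(record: PersonalDataRecord, requested_purpose: str) -> bool:
    # Purpose limitation: data may be used only for the purpose stated at collection.
    return requested_purpose == record.purpose

def may_disclose(record: PersonalDataRecord, recipient: str,
                 legal_authority: bool = False) -> bool:
    # Disclosure only by authority of law or with the data subject's consent.
    return legal_authority or recipient in record.consented_recipients

def purge_expired(records: list, now: datetime) -> list:
    # Storage limitation: drop data no longer necessary for its stated purpose.
    return [r for r in records if now - r.collected_at <= r.retention]

# Usage: a billing record may not be reused for marketing or shared without consent,
# and it falls out of the store once its retention period lapses.
rec = PersonalDataRecord("user-1", {"email": "a@example.org"}, "billing",
                         datetime(2024, 1, 1), timedelta(days=365))
assert may_use(rec, "billing") and not may_use(rec, "marketing")
assert not may_disclose(rec, "ad-broker")
assert purge_expired([rec], now=datetime(2026, 1, 1)) == []
```

A real compliance regime is far richer – lawful bases, subject access, accountability obligations – but even this toy model shows how the principles constrain each stage of the data life cycle.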

The Court of Justice of the European Union (CJEU) has interpreted EU data protection law in light of the rights to privacy and data protection established in the EU Charter of Fundamental Rights, and its recent decisions have had sweeping impacts on public and private actors in Europe and beyond its borders. In 2014, the CJEU ruled that an EU law that allowed member states to mandate the storage of communications metadata for periods of between six months and two years was inconsistent with the rights to data protection and privacy.Footnote 110 According to the CJEU, telephony metadata “may allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained.” It determined that the retention of data of persons who were not linked to crimes was problematic, and that the legal framework lacked clear rules as to how authorities should access and use that data.Footnote 111

The CJEU reiterated its holding in Tele2 Sverige, indicating that “the general and indiscriminate retention of all traffic and location data” was not strictly necessary to achieve the aim of fighting serious crime and terrorism.Footnote 112 It added that member states’ laws could permit the targeted retention of metadata for the purpose of fighting serious crime; they could also permit the retention of data from one or more geographical areas where “objective evidence” demonstrates a clear link “to fighting serious crime or to preventing a serious risk to public security.”Footnote 113 These holdings are consistent with Weber and Saravia, S. and Marper, and other case law of the European Court of Human Rights,Footnote 114 but unlike the latter judgments, they could be implemented immediately by private actors, who were no longer subject to the retention mandate. As such, the judgments had the practical effect of limiting the amount of data accessible to state authorities for surveillance.

In the Google Spain case, the CJEU further demonstrated the capacity of data protection law to regulate the privacy practices of non-state actors. The CJEU held that search engine providers must respond to requests from individuals to de-index their names from search results. Such requests must be honored when the information linked to their names is “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes of the processing at issue,” unless the public interest in finding this information is determined to outweigh the individual’s privacy rights.Footnote 115 Several civil society organizations have argued that the decision improperly placed private companies in the role of public authorities charged with balancing rights and interests. The counterpoint is that perhaps any actor that can impact an individual’s fundamental rights, as defined in the EU Charter, should assume this level of responsibility.

By providing an explicit legal link between the practices of some of the largest multinational corporations and human rights, EU law creates more opportunities for individuals to challenge the practices of large entities. Similarly, it increases the power of European authorities to regulate these companies, both in Europe and abroad. The CJEU’s decisions may also help to define the scope of companies’ responsibility to respect users’ privacy rights, a topic that is explored in greater depth in Chapter 11 of this volume.Footnote 116

As human rights norms become a greater foundation for data protection law, EU authorities are also increasingly applying the latter to data that crosses international borders. The next section examines how the challenge of cross-border data flows is gradually being met by developments in both international human rights law and EU data protection law. It also notes the outstanding dilemmas that neither body of law has yet definitively addressed.

V Ensuring the Right to Privacy Extraterritorially

The privacy protections contained in human rights law have traditionally addressed states’ conduct regarding their subjects’ data within their own borders. But digital communications flow seamlessly across borders, challenging traditional paradigms of jurisdiction over individuals and information.Footnote 117 This means that privacy protections may be illusory when governments with sophisticated surveillance capabilities can access the communications data of people who are not subject to their jurisdiction.

A The Extraterritorial Application of the Right to Privacy

International human rights law provides little guidance as to the obligations of states vis-à-vis non-nationals located beyond their territories whose communications are targeted or simply swept up in bulk surveillance programs.Footnote 118 The ICCPR requires a state party “to respect and to ensure to all individuals within its territory and subject to its jurisdiction” the rights contained in the Covenant without discrimination.Footnote 119 The Human Rights Committee and the ICJ have interpreted this language disjunctively, meaning that a state’s duty extends to “anyone within the power or effective control of that State Party, even if not situated within the territory of the State Party.”Footnote 120 A contrary interpretation would allow states to avoid their human rights obligations when exercising jurisdiction outside of their territories and would be inconsistent with the object and purpose of the treaty.Footnote 121 The United States and Israel have disagreed with this position, and for many years the United States advocated a “strict territoriality” reading of Article 2 of the ICCPR, although its position seems to have softened in recent years.Footnote 122

When the European Court of Human Rights has addressed the extraterritorial conduct of its Contracting Parties, it has found effective control to be present in two types of situations: when state agents “exerci[se] control and authority over an individual” (the personal model of jurisdiction), or when a state occupies a foreign territory through military action and assumes responsibility for some or all of the public functions normally performed by the government in that territory (the spatial model).Footnote 123 Yet this analysis of the degree to which state agents exercise physical control over individuals is ill-suited to the nature of communications surveillance, where control over infrastructure and individuals is virtual.Footnote 124 Communications surveillance programs most often involve a state’s collection and review of data from its own territory, even though the communications may originate and terminate in other states and the rights holders may be beyond the collecting state’s jurisdiction.Footnote 125 Some types of collection more clearly involve extraterritorial action – e.g., a state’s interception of communications traffic via equipment located in its embassies abroad – but the impact on rights occurs in a different manner from the exercise of “effective control” over persons or territory.

Noting the mismatch between the prevailing test for extraterritorial obligations and the facts surrounding communications surveillance, several human rights experts have maintained that when analyzing a state’s exercise of jurisdiction, one should look at its control over rights rather than over individuals or territory. Therefore, in the context of communications surveillance, it is the assertion of authority in ways that affect the rights of individuals that triggers a state’s human rights obligations, even with respect to a person with no connection to that state.Footnote 126 For Marko Milanovic, in most (if not all) of the situations described in the Snowden documents, the state’s obligation to respect the human rights of impacted individuals outside of its territory should apply.Footnote 127 Consequently, the state’s interference with an individual’s privacy rights must be in pursuit of a legitimate aim and be a necessary and proportionate means of achieving that aim. The state’s positive obligation to ensure rights, however, would only apply to individuals located within its territory. Others would eschew the control test entirely and contend that laws that offer distinct protections based on the nationality or location of the subject of surveillance are difficult to justify under human rights law.Footnote 128

In The Right to Privacy in the Digital Age, the OHCHR found several of the aforementioned arguments regarding a state’s extraterritorial human rights obligations to be compelling at a high level, writing:

[D]igital surveillance therefore may engage a State’s human rights obligations if that surveillance involves the State’s exercise of power or effective control in relation to digital communications infrastructure, wherever found, for example, through direct tapping or penetration of that infrastructure. Equally, where the State exercises regulatory jurisdiction over a third party that physically controls the data, that State also would have obligations under the Covenant. If a country seeks to assert jurisdiction over the data of private companies as a result of the incorporation of those companies in that country, then human rights protections must be extended to those whose privacy is being interfered with, whether in the country of incorporation or beyond.Footnote 129

The report adds that, according to the principle of nondiscrimination contained in the ICCPR, states must respect the legality, necessity, and proportionality principles regardless of the nationality or location of the subject of communications surveillance.Footnote 130

The OHCHR explicitly declined to limit the scope of the state’s obligations to subjects of communications surveillance beyond its borders to that of merely respecting rights, in a manner similar to the statements of the ICJ and the Human Rights Committee. This leaves open the question of whether, under the ICCPR, a state may have a duty to ensure the rights of these individuals, even though the basis for jurisdiction may be a fleeting or virtual action. If so, many of the obligations outlined above could attach to state action that has a definitive impact on the privacy rights of individuals beyond the state’s territory. Extraterritorial surveillance would have to be based on laws that are consistent with international human rights standards and be subject to effective oversight. Any individual whose rights are affected would have to be given access to an effective remedy, and the regulation of non-state actors would extend to extraterritorial actions as well.

The United States’ 2014 update to its signals intelligence policy, requiring that intelligence gathering “include appropriate safeguards for the personal information of all individuals” irrespective of their nationality or location,Footnote 131 is the most explicit action taken by a state to date to extend protections to those impacted by its extraterritorial surveillance. In light of the broad powers contained in the UK Investigatory Powers Act and other laws, more detailed interpretations of these obligations from UN mechanisms or from the European Court are needed to guide state action.

B EU Data Protection Law and Extraterritorial Privacy Protections

This chapter has argued that European authorities are interpreting data protection law in a way that fills the gaps in privacy protections left by international human rights law. As part of this effort, they are also increasingly applying EU data protection law extraterritorially, in an attempt to fill the void of uncertainty regarding the protections that adhere to individuals’ communications data when it crosses borders. In doing so, EU authorities may ultimately elevate privacy protections for communications well beyond the European continent.

The new EU General Data Protection Regulation and the proposed Privacy and Electronic Communications Regulation specify that they are binding on companies located outside of the EU that offer services to data subjects within the EU or otherwise monitor their behavior.Footnote 132 Since 1995, EU law has restricted the transfer of personal data outside of Europe to states that are deemed to have an adequate level of legal protection for the privacy rights of individuals.Footnote 133 The CJEU has interpreted this provision to mean that a third country must offer “a level of protection of fundamental rights and freedoms that is essentially equivalent to that guaranteed within the European Union” in order for general transfers to that country to be approved.Footnote 134 A multinational company may also transfer data to a state that has not been deemed adequate if the company commits to providing adequate safeguards.Footnote 135 Furthermore, the recently adopted EU-US Umbrella Agreement establishes privacy protections for the personal data of Europeans (as well as persons from the United States) in the context of criminal law enforcement cooperation.Footnote 136 With these instruments, EU authorities aim to achieve a baseline level of privacy protection for their subjects’ communications and other personal data vis-à-vis foreign actors from the private and public sectors, regardless of where they are located or where they handle that data.

National authorities in the EU are also seeking to apply EU data protection law extraterritorially by requiring companies to comply on a worldwide basis, as opposed to only with reference to sites directly aimed at the specific jurisdiction in question. For example, following the CJEU’s Google Spain decision, French data protection authorities ordered Google to de-index search results that fit the judgment’s criteria on a global scale, in order to protect data subjects’ privacy rights more effectively.Footnote 137 Google had previously ensured that no users located in the European Union could access de-indexed results, but French authorities seek to make de-indexing decisions applicable across the global Internet. If upheld on appeal, this judgment could extend the reach of certain European data protection norms internationally.Footnote 138

In addition to strengthening protections for the privacy rights of Europeans regardless of where their data flows, the European approach may also elevate privacy protections for individuals outside of the region. A handful of non-EU states have been designated as having adequate data protection standards by the European Commission, and this stable basis for data transfer is attractive for trading partners. In the wake of the Snowden revelations, the CJEU used this mechanism to push for changes in US surveillance law. In the Schrems case of 2015, the CJEU invalidated the European Commission’s decision that the US legal regime offered an adequate level of protection for data subjects under the Safe Harbor Agreement reached between the US government and the European Commission. The CJEU determined that “legislation permitting the public authorities to have access on a generalised basis to the content of electronic communications” for national security purposes was inconsistent with the right to privacy.Footnote 139

The Schrems decision had the potential to halt a significant portion of the transatlantic flow of personal data, prompting US and EU authorities to negotiate the Privacy Shield agreement as a replacement.Footnote 140 US authorities have also supplemented the agreement with detailed explanations of US surveillance law and practice. Nevertheless, the adequacy of the US legal regime continues to be impugned.Footnote 141 States beyond Europe are also following the region’s example when updating data protection laws, by limiting the legal bases for collecting personal data and restricting the flow of data to states that are deemed adequate.Footnote 142 Thus, the ultimate legacy of the Schrems case may be a gradual harmonization of data protection standards among key parts of the data economy, with EU rules serving as the foundation.

Despite the evolution of international human rights law and EU data protection law regarding privacy and cross-border data flows, clear rules have not yet emerged to address which state’s privacy protections should apply to communications data when multiple governments assert jurisdiction over it.Footnote 143 In a case involving Microsoft in the United States, a federal appeals court ruled that the location of the data should determine which state may claim jurisdiction (and which privacy protections apply).Footnote 144 The UK Investigatory Powers Act allows the government to issue extraterritorial warrants for communications data if the data is held by a company that is subject to its regulatory jurisdiction.Footnote 145 For Jennifer Daskal, both approaches to jurisdiction are unsatisfactory, given the mobility of data, the incentives for companies and governments to decide who may access data based on where it is stored, and the conflicting legal obligations that companies may face.Footnote 146 Instead, Daskal proposes that the law should allow for multiple jurisdictional triggers to be evaluated, including the nationality and location of the data subject.Footnote 147 The absence of clear rules on jurisdiction and privacy protections in this scenario has led to calls for international law to fill the void through the negotiation of an international treatyFootnote 148 or smaller bilateral or multilateral agreements.Footnote 149

From a human rights perspective, the OHCHR’s position should guide the development of any such framework: The privacy protections that attach to a person’s communications when she transits borders or when jurisdiction is disputed should be those that are contained in international human rights law. Any state that impacts those rights – by accessing the data or sharing it with another state – should be required to ensure those protections. UN experts and the European Court of Human Rights can support efforts to establish robust and predictable privacy protections that transcend borders by continuing to develop standards on the universality of privacy rights in the digital age.

VI Conclusion

Developments in communications technology, coupled with revelations by Edward Snowden and others, have demonstrated that while human rights law has a well-developed body of standards on the right to privacy in communications, there are key areas where these standards fall short. The bulk collection of communications data seems generally permitted but circumscribed in human rights law, although few states appear to conduct such surveillance in accordance with these limits. Rules regarding the protections that apply to communications and other personal data when they are in the hands of private companies or when they transit borders are evolving, but at present are incomplete.

The most consequential recent development in this space may be the interpretation of EU data protection law in a way that incorporates or converges with the right to privacy. EU institutions are using data protection norms and enforcement mechanisms to give individuals stronger protections against the public and private actors that access their communications, regardless of location. This approach has the potential to contribute to stronger privacy protections beyond Europe, as its norms are increasingly replicated by other states seeking determinations of adequacy. Ideally, the European approach will also prompt UN mechanisms and governments to come together to devise more global solutions for the protection of privacy in the digital age, with international human rights law as their foundation.

11 Human Rights and Private Actors in the Online Domain

Rikke Frank Jørgensen
I Introduction

The UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression recently stated that the Internet has become the central global public forum with profound value for human rights.Footnote 1 In this global public forum, control over infrastructure and services is largely in the hands of companies, with some of the most powerful being from the United States. To participate in the online sphere, individuals must engage with online platforms such as Google and Facebook and rely on them for exercising rights and freedoms such as freedom of expression, freedom of information, and freedom of assembly.

In this sense, these companies have increasing power to influence rights in the online domain. The power of the major platforms flows from their control over a wide range of resources crucial to information search and public participation in the online realm. In 2013, The New York Times had a print and digital circulation of nearly two million and claimed to be the most visited newspaper site, with nearly thirty-one million unique visitors every month. YouTube, in contrast, had one billion unique visitors a month in 2014, or as many in a day as The New York Times has in a month.Footnote 2 In terms of company valuations, as of April 2014 The Times’s market value was around 1 percent of the value of Facebook or Google.Footnote 3 By the end of 2015, Facebook had more than 1.6 billion users a monthFootnote 4 and Google more than one billion searches a month.Footnote 5

Online platforms are used every day by billions of people to express themselves and to comment on, debate, critique, search, create, and share views and content. As such, the Internet’s distributed architecture and the decrease in communications costs have fundamentally altered the capacity of individuals to be active participants in the public sphere. On a positive note, this networked public sphere facilitates new means for civic engagement, public participation, social change, and countering repressive governments.Footnote 6 On a more cautious note, scholars have warned that the new infrastructure for exercising freedom of expression carries with it new modalities of interference with fundamental rights, and that adequate legal responses have yet to be found.Footnote 7

One area of concern – not least among legal scholars, data protection authorities, and groups set up to protect fundamental rights on the InternetFootnote 8 – is the privatized law enforcement and self-regulatory measures of these corporate platforms. The concern is particularly related to the platforms’ means of “content regulation” and privacy practices; for example, their day-to-day decisions on which content to remove or leave up, and the extent to which they collect, process, and exchange personal data with third parties.Footnote 9 Several cases in the United States and Europe have addressed this concern, and new cases continue to appear.Footnote 10 Scholars have also warned of a governance gap, where private actors with strong human rights impacts operate within the soft regime of guidelines and corporate social responsibility with no direct human rights obligations.Footnote 11 International human rights law is binding on states only, and despite an increasing take-up of human rights discourse within Internet companies, their commitment remains voluntary and nonbinding. In addition, limited information is available in the public domain concerning the corporate practices that affect freedom of expression and privacy.

Although a part of public discourse has always unfolded within private domains, from coffeehouses to mass media, the current situation is different in scope and character. In the online realm, the vast majority of social interactions, discussions, expressions, and controversies take place on platforms and services provided by private companies. As such, an increasing portion of our sociality is conducted in privately owned spaces. In addition, these practices are entangled in a business model in which the conversations and interactions that make up online life are directly linked to revenue. This arguably represents yet another stage of the trend of privatization. Prior examples include the dominance of corporate-owned media over the civic public sphere, the outsourcing of government functions to private contractors, and the reduction of public spaces to malls and privately owned town squares.Footnote 12 However, the increasing significance of online platforms for public life gives rise to a large number of unresolved questions related to the techno-social design, regulation, and human rights impact of these companies as “curators of public discourse.”Footnote 13

As several scholars have argued, these online platforms have an enormous impact on human rights globally through the policies they adopt for their users. Within “Facebookistan” and “Twitterland,”Footnote 14 these policies have just as much validity as traditional legal rules and standards.Footnote 15 Moreover, the companies have wide discretion in enforcing the policies, as they weigh potential precedents, norms, competing interests, and administrability in developing the rules of expression and privacy that effectively govern their users worldwide. Arguably, Google’s lawyers and executives have as much power to determine who may speak and who may be heard around the world as does any president, king, or Supreme Court justiceFootnote 16 – or, as expressed by Marvin Ammori, “Technology lawyers are among the most influential free expression lawyers practicing today.”Footnote 17 At the same time, the core business of these companies is built around expression, and most of them talk about their business in the language of freedom of expression and freedom of information. Google’s official mission is “to organize the world’s information and make it universally accessible and useful.”Footnote 18 Twitter stresses that its goal is “to instantly connect people everywhere to what is most meaningful to them. For this to happen, freedom of expression is essential.”Footnote 19 Twitter also states that tweets must flow as a default principle. Facebook’s vision is to “give people the power to share and make the world more open and connected.”Footnote 20

In relation to privacy, the online infrastructure of free expression is increasingly merging with the infrastructure of content regulation and surveillance. The technologies, institutions, and practices that people rely on to communicate with one another are the same technologies, institutions, and practices that public and private parties employ for surveillance.Footnote 21 The online infrastructure simultaneously facilitates and controls freedom of expression, surveillance, and data mining. As such, it has become a new target for governments and corporate interests alike.

Since 2009, several of the major Internet companies have upgraded and formalized their human rights commitment. Most notably this has been via industry initiatives, such as the Global Network Initiative, that focus on a company’s compliance with international human rights standards on privacy and freedom of expression.Footnote 22 Also, as Lisl Brunner points out in Chapter 10, in the wake of the NSA contractor Edward Snowden’s revelations of state surveillance, there has been increasing public focus on the exchange of personal data between Internet companies and government agencies. As a result, several companies have started to publish transparency reports to document (at an aggregated level) the numbers and types of content removal requests they receive and accommodate.Footnote 23 Despite these efforts, there is still limited public knowledge of companies’ internal mechanisms of governance; e.g., how they decide cases with freedom of expression implications or how they harness user data.Footnote 24 As illustrated by cases in the European Union as well as in Europe more broadly, a number of human rights–related practices continue to cause concern among scholars and regulators alike.Footnote 25

Using the example of Internet companies, this chapter will critically examine current challenges related to human rights protection in the online domain. This will include questions such as: How shall we understand the role of Internet companies vis-à-vis freedom of expression? What does human rights law – and soft law such as the UN Guiding Principles on Business and Human Rights – say about private actors and their human rights responsibilities? How have major Internet companies taken up these challenges in their discourse and practices? What are some of the dynamics that work for or against stronger human rights protection online? And are the frameworks that currently govern the activities of these Internet companies sufficient to provide the standards and mechanisms needed to protect and respect human rights online?

II The Role of Internet Companies in the Online Domain

Over the past ten years, the Internet’s potential positive and negative impacts on human rights have been reiterated time and again by the UN World Summit on the Information Society,Footnote 26 the UN Human Rights Council,Footnote 27 and UN thematic rapporteurs.Footnote 28 The former UN Special Rapporteur on the Promotion and Protection of Freedom of Opinion and Expression, Frank La Rue, for example, emphasized the unprecedented opportunity presented by the Internet to expand the possibilities for individuals to exercise a wide range of human rights, with freedom of opinion and of expression as prominent examples.Footnote 29 Special Rapporteur La Rue also expressed concerns about the multiple measures taken by states to prevent or restrict the flow of information online, and he highlighted the inadequate protection of the right to privacy on the Internet.Footnote 30 Of specific relevance to this chapter is his emphasis on the way private actors may contribute to violating human rights online, given that Internet services are run and maintained by companies.Footnote 31 In parallel to this, policy reports and scholarship have increasingly addressed the specific challenges related to human rights protection in the online domain.Footnote 32

It is now widely recognized that access to the Internet and participation in discourse through the Internet have become integral parts of democratic life. What is less debated is the fact that facilitating this democratic potential critically relies on private actors. Access to the Internet takes place through Internet service providers, information search is facilitated by search engines, social life plays out via online platforms, and so on. Despite the increasing role that these private actors play in facilitating democratic experience online, the governance of this social infrastructure has largely been left to companies to address through corporate social responsibility frameworks, terms of service, and industry initiatives such as the Global Network Initiative.Footnote 33 Moreover, there is limited research critically assessing the frameworks that govern the activities of these Internet companies and questioning whether they are sufficient to provide the standards and compliance mechanisms needed to protect and respect human rights online.

The Internet’s democratic potential is rooted in its ability to promote “a culture in which individuals have a fair opportunity to participate in the forms of meaning making that constitute them as individuals.”Footnote 34 Democratic culture in this sense is more than political participation; it encompasses broad civic participation where anyone, in principle, may participate in the production and distribution of culture. This democratic potential is linked to the Internet’s ability to provide its users with unprecedented access to information and to decentralized means of political and cultural participation.Footnote 35 By decentralizing the production of content, supplementing mass media with new means of self-expression, and enabling collective action across borders, the Internet has the potential to be a more participatory public sphere. This potential has been widely addressed in the body of literature that considers the Internet as a new or extended public sphere, yet with limited evidence of the actual democratic impact of these new modalities.Footnote 36 Moreover, the democratic implications of having private actors with no public interest mandate controlling this sphere are still not sufficiently clear, yet several challenges surface.

A No Public Streets on the Internet

In the United States, the protections of a speaker’s right to speech vary based on the chosen forum. The Supreme Court distinguishes among three types of forums: traditional public forums, designated forums, and nonpublic forums.Footnote 37 The traditional public forum doctrine protects speech in public places such as streets, sidewalks, and parks, which are traditionally recognized as being held in common for the public good.Footnote 38 Expressive activity in these spaces can, in specific and narrowly defined cases, be subject to “time, place, and manner restrictions,” but only in exceptional cases can such restrictions be based on the messages themselves.Footnote 39 In contrast, the owners of private property are relatively free in the restrictions they may place on the speech that takes place on their property.

When Internet users search for information, express opinions, debate, or assemble, they largely do so within privately owned forums. Accordingly, the company that provides the service is free to set the conditions for allowed expressions and actions on its platform. As Stacey Schesser explains, “Each private URL owner controls the traffic on his or her website, therefore limiting the application of the First Amendment to the site. Although a website author may choose not to censor postings on her blog or remove discussion threads on his bulletin board, each URL owner retains the right to do so as a private actor.”Footnote 40 Legally speaking, the online sphere holds no public streets or parks, and social media platforms such as Facebook and Google Plus do not constitute public forums, but rather private property made open to the public. In line with this, there is no First Amendment protection of speech on these platforms. Indeed, the communications that users provide as they tweet or contribute to Facebook, Google, or LinkedIn are largely private property, owned by the company that provides the service.Footnote 41

Moreover, these companies have broad power to restrict speech that would otherwise be protected by the First Amendment. The highly praised liability regime for online Internet services in the United States, which immunizes intermediaries from liability for third-party contentFootnote 42 as codified in Section 230 of the Communications Decency Act, effectively gives Internet companies the discretion to regulate content. Without Section 230, Internet companies could be held secondarily liable for content posted on their platforms, including defamatory speech, if they took steps to moderate that content by removing speech that might be offensive to other users. Section 230’s so-called Good Samaritan provision protects Internet services from liability if they restrict access to material or give others the technical means to do so.Footnote 43

B Online Gatekeepers

In an attempt to categorize the Internet companies in control of the online public sphere, Emily Laidlaw focuses on their democratic impact, identifying three different types of gatekeepers: micro gatekeepers, authority gatekeepers, and macro gatekeepers.Footnote 44 According to this typology, macro gatekeepers maintain significant information control due to their size, influence, or scope, and due to the fact that users must pass through them to use the Internet. Examples of companies in this category would be Internet service providers, mobile network providers, and major search engines. Authority gatekeepers control high amounts of information traffic and information flow, although users are not dependent on them to use the Internet. Examples include sites such as Wikipedia and Facebook. In contrast, micro gatekeepers are sites that play a less important role as sources of information, but still facilitate information and debates of democratic significance, such as certain news sites.Footnote 45 Laidlaw’s framework suggests that the human rights obligations of Internet gatekeepers should increase when they have the power to influence democratic life in a way traditionally reserved for public bodies. The scale of responsibility is reflected not only in the reach of the gatekeeper, but also in the infiltration of that information, process, site, or tool in democratic culture.Footnote 46

C Expressions Are Products

The current communications environment is also unique because user expressions constitute the products on which the business models of Internet companies are built. The business models of most, if not all, of the major online services are based on targeted advertising. This means that when individuals participate online – for example, by engaging in conversation or searching for information – these actions are captured, retained, and used for advertising purposes and, as such, constitute products that feed into the online business model. This is essentially different from the predigital age, when individuals’ conversations, social networks, preferences, and information searches were neither captured nor central to the intermediary’s business model.

Because expressions are products, the relationships that people have with Internet companies are fundamentally different from traditional company-customer relationships. As Bruce Schneier explains:

Our relationship with many of the internet companies we rely on is not a traditional company-customer relationship. That’s primarily because we’re not customers. We’re products those companies sell to their real customers. The relationship is more feudal than commercial. The companies are analogous to feudal lords, and we are their vassals, peasants, and – on a bad day – serfs. We are tenant farmers for these companies, working on their land by producing data that they in turn sell for profit.Footnote 47

Although this feudal analogy may appear extreme, Schneier reminds us that what appear to be free products are not. The information and communications that users provide when using the services are essential elements in the online business model and, as such, represent the core source of income for the companies.

There should be nothing new or controversial about an Internet company seeking to optimize its revenue via advertising. What is troubling is that these platforms de facto control large chunks of the online public sphere, while users have little practical ability to opt out of the business scheme. There are no public streets on the Internet, and there are limited means of participating in political or cultural life outside the commercial realm. Moreover, contributing to the online economy via online expressions, habits, and preferences has become a premise for participation in the networked public sphere. Thus, according to Schneier: “It’s not reasonable to tell people that if they don’t like data collection, they shouldn’t e-mail, shop online, use Facebook, or have a cell phone… . Opting out just isn’t a viable choice for most of us, most of the time; it violates what have become very real norms of contemporary life.”Footnote 48

On an equally skeptical note, Shoshana Zuboff argues that the economic characteristics of the online business model are in the process of undermining long-established freedoms and represent a largely uncontested new expression of power.Footnote 49 Scholars such as Julie Cohen and Niva Elkin-Koren have cautioned that the digital era poses threats to fundamental freedoms whose ramifications we have yet to understand.Footnote 50 Elkin-Koren notes, “As information becomes crucial to every aspect of everyday life, control over information (or lack thereof) may affect our ability to participate in modern life as independent, autonomous human beings.”Footnote 51

Thus, access to the Internet and participation in discourse through the Internet have become integral parts of modern life. The exercise of this public life, however, takes place almost exclusively via privately owned platforms. Moreover, it is entangled in a business model in which knowledge of individual behavior and preferences is closely linked to revenue. In effect, this means that private actors have unprecedented power to impact the way that billions of users are able to express themselves, search and share information, and protect their privacy. Yet as private actors, they remain largely outside the reach of human rights law.

In the following, I will examine some of the legal and extralegal dimensions of this challenge. First, what does human rights law say about the obligations of private actors? Second, how have the companies themselves responded to these challenges? And third, do these approaches suffice to protect human rights online?

III Human Rights Law and Private Actors

Human rights law is state-centric in nature in the sense that states – not individuals, not companies – are the primary duty bearers. Legally speaking, only the state can be brought before a human rights court, such as the European Court of Human Rights, and examined for alleged human rights violations. Part of this obligation, however, is a duty upon the state to ensure that private actors do not violate human rights, referred to as the horizontal effect of human rights law. National regulation related to labor rights or data protection, for example, serves as machinery for enforcing human rights standards in the realm of private parties.

Whereas human rights law is focused on the vertical relation (state obligations to the individual), it recognizes the horizontal effect that may arise in the sphere between private parties.Footnote 52 The horizontal effect implies a state duty to protect human rights in the realm of private parties, for example, via industry regulation. Much of the literature on online freedoms has focused on new means of state interference with human rights, for example, through restricting content, engaging in surveillance, or involving Internet companies in law enforcement. These new means of state interference have been explored in several comprehensive studies, for example, by the Open Net InitiativeFootnote 53 and by scholars such as Jack Balkin, who have examined the characteristics of “old-school” (pre-Internet) versus “new-school” speech regulation. In contrast, less attention has been paid to the implications that arise in the sphere of horizontal relations, such as when companies, on their own initiative, remove content because it violates their terms of service, or when they exchange personal data with third parties as part of their business model. In the analysis that follows, emphasis will be on horizontal relations and the human rights duties and responsibilities that may be invoked in this realm.

Over the past decade, the interface between human rights law and private actors has been the focus of considerable attention, resulting in the adoption of broad soft law standardsFootnote 54 and the launch of many multistakeholder initiatives, including the UN Global Compact. The UN Global Compact represents one of the core platforms for promoting corporate social responsibility (CSR), a concept that refers to a company’s efforts to integrate social and environmental concerns into its business operations and stakeholder interactions. According to the UN Global Compact’s framing of corporate social responsibility, businesses are responsible for human rights within their sphere of influence. While the sphere of influence concept is not defined in detail by international human rights standards, it tends to include the individuals to whom a company has a certain political, contractual, economic, or geographic proximity.Footnote 55 Arguably, CSR has some normative basis in human rights discourse, but these rights have not been well integrated:

On the whole, relatively few national CSR policies or guidelines explicitly refer to international human rights standards. They may highlight general principles or initiatives that include human rights elements, notably the OECD Guidelines and the Global Compact, but without further indicating what companies should do operationally. Other policies are vaguer still, merely asking companies to consider social and environmental “concerns,” without explaining what that may entail in practice.Footnote 56

Even where CSR pays attention to human rights, it primarily addresses social and economic rights, in particular as they relate to working conditions and environmental and community impact, with limited attention to civil and political rights.Footnote 57 The critique that the CSR framework was too limited in scope, focusing on selected rights only, was one of the drivers of the work of John Ruggie, who served as the Special Representative of the Secretary-General on the issue of human rights and transnational corporations from 2005 to 2011.

In 2011, Ruggie’s work culminated in the UN Human Rights Council’s endorsement of the United Nations Guiding Principles on Business and Human Rights (UNGP).Footnote 58 The UNGP provides a set of principles that states and businesses should apply to prevent, mitigate, and redress corporate-related human rights abuses. In contrast to the sphere of influence approach, the UNGP focuses on the potential and actual human rights impact of any business conduct.Footnote 59 The UNGP elaborates the distinction between the state duty to protect human rights and the corporate responsibility to respect human rights, organized around three pillars often called the “Protect, Respect, and Remedy” framework. The first pillar (Protect) focuses on the role of the state in protecting individuals’ human rights against abuses committed by non-state actors; the second pillar (Respect) addresses the corporate responsibility to respect human rights; and the third pillar (Remedy) explores the roles of state and non-state actors in securing access to remedy. Ruggie’s report to the Human Rights Council, which provided the basis for the UNGP, explains:

Each pillar is an essential component in an inter-related and dynamic system of preventative and remedial measures: the State duty to protect because it lies at the very core of the international human rights regime; the corporate responsibility to respect because it is the basic expectation society has of business in relation to human rights; and access to remedy because even the most concerted efforts cannot prevent all abuse.Footnote 60

The second pillar affords a central role to human rights due diligence by companies. Due diligence comprises four steps, taking the form of a continuous improvement cycle.Footnote 61 Companies must publish a policy commitment to respect human rights. As part of its due diligence process, a company must assess, using a human rights impact assessment, the actual and potential impacts of its business activities on human rights; integrate the findings of this assessment into company policies and practices; track how effective the company is in preventing adverse human rights impacts; and communicate publicly about the due diligence process and its results. Companies are expected to address all their impacts, though they may prioritize their actions. The UNGP recommends that companies first seek to prevent and mitigate their most severe impacts or those where a delay in response would make consequences irremediable.Footnote 62

Since the corporate responsibility to respect human rights refers to all internationally recognized human rights, not just those in force in any one particular jurisdiction,Footnote 63 human rights due diligence should encompass, at minimum, all human rights enumerated in the International Bill of Human Rights.Footnote 64 The UNGP guidance on human rights impact assessments remains at a general level, without a detailed description of the process or guidance on how it should be adapted to particular industries. Various initiatives have since attempted to address this gap, a point we will return to below.Footnote 65

Whereas pillars one and three combine existing state obligations under international human rights law with soft law recommendations, pillar two is soft law only, reflecting the lack of direct human rights obligations for companies under international law.Footnote 66 The debate on whether and how to create binding human rights obligations for companies has been ongoing for more than two decades, but there is little indication that companies will be bound by human rights law in the foreseeable future.Footnote 67

With regard to the state duties, the UNGP reiterates two existing human rights obligations. First, states must protect against human rights abuses within their territory and jurisdiction by third parties,Footnote 68 and second, states must provide individuals access to remedies for human rights abuses.Footnote 69 According to the first obligation, the state is required to take appropriate steps to prevent, investigate, punish, and redress private actors’ human rights abuses that take place in its jurisdiction. Such steps include effective policies, legislation, and regulation; access to remedies; adjudication; and redress. The second obligation requires states to take appropriate steps to ensure that injured parties have access to effective remedies when business-related human rights abuses occur within the state’s territory or jurisdiction. This includes remedies provided via judicial, administrative, legislative, or other appropriate means.

In line with this, the case law of the European Court of Human Rights (ECtHR) confirms that states have an obligation to protect individuals against violations by business enterprises. This obligation covers business enterprises acting both as third parties and as state agents. In the former case, the human rights violation is constituted by the state’s failure to take reasonable measures to protect individuals against abuse by business enterprises; in the latter, the abusive act of the business enterprise is attributed to the state, so that the state is considered to directly interfere with the rights at stake.Footnote 70 The case law of the ECtHR on violations by business enterprises acting as state agents concerns both the case where the state owns or controls business enterprises and the case where private corporations exercise public functions through procurement contracts and privatization of public services.Footnote 71

Ruggie’s framework, which has been widely praised and endorsed by states as well as business enterprises, has also been criticized for its slow uptake, its ineffectiveness, and for not creating binding obligations on companies.Footnote 72 Yet, a hard-law punitive approach has also long had its skeptics, and numerous empirical studies have spoken to the significance of social factors, both internal and external, in affecting companies’ behavior.Footnote 73

The UNGP has resulted in several follow-up initiatives at both the global and regional level. At the global level, a UN working group on human rights and transnational corporations and other business enterprises was established in June 2011 to promote the effective and comprehensive dissemination and implementation of the UNGP.Footnote 74 After completing its initial three-year appointment in 2014, the group had its mandate extended for another three-year term.Footnote 75 The group has, among other things, produced a “Guidance” on the development of national action plans on business and human rights.

At the European level, the European Commission has produced sector-specific guides on UNGP implementation in relation to three business sectors, including the information and communication technology (ICT) sector.Footnote 76 The guide is not a legally binding document, but translates the expectations of the UNGP into sector-specific terms, albeit at a rather generic level. In relation to the ICT sector, the guide stresses that the right to privacy and to freedom of expression can be particularly impacted by companies in the ICT sector.Footnote 77 The guide focuses on the state pressure that companies may be subjected to when they operate in contexts where the national legal framework does not comply with international human rights standards (i.e., a vertical conflict). In contrast, the negative human rights impact that may flow from a company’s own governance of content or tracking of user behavior is not addressed; as such, the guide offers little direction on horizontal conflicts (i.e., relations between private actors). This focus on the vertical conflict is also dominant in the Global Network Initiative (addressed below) and indicates that the human rights discourse of Internet companies tends to highlight push-back strategies against illegitimate government requests, with less attention being paid to the human rights impact of the companies’ own actions.

This points to an unanswered question: What would human rights law and supplementary guidelines such as the UNGP say about the responsibility of private actors that potentially affects the rights of billions of individuals worldwide?

As stated above, states are obligated to prevent human rights violations by private actors, and private actors have a moral obligation to respect human rights. States cannot delegate their human rights obligations to a private party, and they are obligated to ensure that appropriate regulations result in human rights–compliant business practices. Moreover, each company has a responsibility to assess its actual human rights impact, i.e., the way that its operational practices, services, and products affect its users’ human rights.

The state obligation to ensure human rights entails both a negative and a positive element. It requires the state to refrain from certain conduct, but also to take positive steps to ensure the enjoyment of the right in question. Freedom of expression, for example, requires that the state refrain from engaging in censorship, but also that it – via national regulation – enables freedom of the press.Footnote 78 The measures and behavior required of businesses to fulfill their responsibility to respect human rights should be provided for by each state’s respective national laws and policies in all the various areas in which these laws and policies touch on business activities.Footnote 79

Arguably, in many specific cases, such regulation exists and businesses do, to a large extent, respect human rights standards by complying with legal rules. It would be too optimistic, however, to assume that governments and subordinate public authorities always have the ability and the will to regulate business conduct in line with human rights requirements,Footnote 80 not least in relatively new policy areas such as online freedom of expression. Moreover, in the case of online service providers, there is an additional layer of responsibility. Not only does the company have responsibilities in relation to its employees and community, it also directly or indirectly affects its users, who in practice might be billions of people.

Historically, controversial cases have involved privacy and freedom of expression in particular, yet with some legal subtleties that distinguish the two rights in question. As Lisl Brunner notes in Chapter 10, the right to privacy has some protection in national legislation (in particular in Europe) in the form of data-protection laws that stipulate principles, procedures, and safeguards that public and private actors must adhere to when collecting and processing personal data.Footnote 81 In the EU context, for example, Google is subject to the European Data Protection Directive, which imposes conditions and safeguards for data collection, processing, and exchange on public institutions and private companies alike. When Google, as in the Google Spain case, violates a user’s right to privacy, the company is the direct duty bearer under Spanish data protection legislation.Footnote 82

In contrast, Internet platforms are rarely subject to regulation concerning the negative impact they may have on freedom of expression. When content is filtered, blocked, or taken down by Twitter because it allegedly violates its community standards, there is limited guidance in international human rights law, and rarely is there national legislation that applies. In these situations, the company is acting in a judicial capacity, deciding whether to allow content to stay up or to remove it according to internal governance practices and standards, but without the human rights requirements that would apply if Twitter were a state body rather than a private company. For example, if Twitter removes posts for violating its community standards, this does not trigger international human rights law. In contrast, if a state-owned Twitter were to remove content from the public domain, this practice would have to follow the three-part test governing limits on freedom of expression. According to the three-part test, any limitation on the right to freedom of expression must be provided by law that is clear and accessible to everyone; it must pursue one of the purposes set out in Article 19, paragraph 3 of the ICCPR; and it must be proven as necessary and the least restrictive means required to achieve the purported aim.Footnote 83

A related challenge concerns cases where content is taken down because it allegedly violates national law in the country of operation. As mentioned, Internet services hosted in the United States are insulated from liability under Section 230 of the Communications Decency Act. Concerning copyright infringements, however, Section 512 of the Digital Millennium Copyright Act codifies only limited immunity: Internet services must remove allegedly illegal content when notified of its presence on their service, or they will face liability for that content.Footnote 84 This is similar to the European approach, which imposes limited liability on online services through the Electronic Commerce Directive. Both regimes have been criticized for encouraging businesses to privately regulate their affairs, with freedom of expression implications.Footnote 85 When Facebook, for example, acts upon an alleged copyright violation by removing the content, it is making decisions with freedom of expression implications, yet as a private actor it is not obligated to follow the three-part test prescribed by human rights law. The regimes that insulate online platforms from liability for the third-party content they carry also effectively shield them when they take down protected content out of fear of liability (e.g., for alleged copyright infringement).

In sum, the practices of online platforms (especially macro or authority gatekeepers) have effects on freedom of expression and privacy far beyond their roles as employers and members of a community. Do their power, influence, and capacity to affect democratic life qualify these platforms as a special class of public interest companies that carry additional corporate responsibilities beyond the duty to respect human rights?Footnote 86 Does this accentuate the positive obligation on the state to legislate the obligations of these companies? Although Ruggie briefly touched upon these issues (with prisons as an example), there is limited guidance in his work as to the answers. In addition, although these companies’ negative impact on privacy is regulated in some regions of the world, their potential negative impact on freedom of expression is not. Neither the United States nor Europe has regulations to protect against the potential negative impact that the content-regulation practices of a major Internet company could have on freedom of expression. Moreover, the human rights responsibilities of Internet companies are largely discussed in relation to illegitimate government requests, as exemplified by the Global Network Initiative, addressed below.

The above challenges are rooted in the gray zone where human rights law ends and corporate social responsibility begins, and it is in this zone that online platforms operate. Their practices may affect human rights immensely, yet they are not regulated with a view to their impact on freedom of expression, freedom of information, or privacy, except in some specific cases. Moreover, even when Internet companies are subjected to regulation, such as on data protection, the past ten years have illustrated the tremendous challenge of holding those companies accountable to these standards. As such, there is a lacuna in the checks and balances on these private actors. This is paradoxical, since these private actors are at the center of the Internet’s democratizing force and have significant human rights impacts on their users.

IV The Uptake of Human Rights within Internet Companies

As previously mentioned, most human rights cases related to the ICT sector have concerned freedom of expression and the right to privacy. In December 2008, this led to the launch of the first industry initiative concerned with the human rights compliance of Internet companies, the Global Network Initiative, addressed below. First, however, it should be noted that most major global platforms emphasize freedom of expression as a core element of their business. Facebook’s mission, for example, is framed in the language of freedom of expression and association by its founder, Mark Zuckerberg:

There is a huge need and a huge opportunity to get everyone in the world connected, to give everyone a voice and to help transform society for the future… . By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.Footnote 87

At Twitter, the company vision is closely linked to freedom of expression and the new digital means of realizing this right: “Our legal team’s conceptualization of speech policies and practices emanate[s] straight from the idealism of our founders – that this would be a platform for free expression, a way for people to disseminate their ideas in the modern age. We’re here in some sense to implement that vision.”Footnote 88 Google stresses that the company “[has] a bias in favor of people’s right to free expression in everything we do.”Footnote 89

The Global Network Initiative (GNI) has, since 2008, been the common venue for some of the major Internet companies’ discourse on human rights norms related to freedom of expression and privacy.Footnote 90 The GNI is a multistakeholder group of companies, members of civil society, investors, and academics that was launched in the United States. Formation of the GNI took place against the backdrop of two particular incidents. One was Yahoo’s handover of user information to Chinese authorities, which exposed the identity of a Chinese journalist and led to his arrest and imprisonment. The second was Google’s launch of a censored search engine in China.Footnote 91

The goal of the GNI is to “protect and advance freedom of expression and privacy in the ICT sector.”Footnote 92 At the time of writing, Google, Yahoo, Facebook, Microsoft, and LinkedIn were the Internet company members, whereas seven of the big telecommunication companies – united in the parallel initiative Telecommunications Industry Dialogue – were admitted as members in 2017.Footnote 93 The baseline for GNI’s work consists of four core documents, developed in broad collaboration among the participants: the “Principles,” the “Implementation Guidelines,” the “Accountability, Policy and Learning Framework,” and the “Governance Charter.” The Implementation Guidelines operationalize the overall principles in detailed guidance to companies, whereas the Governance Charter describes how the GNI is governed in order to ensure integrity, accountability, relevance, effectiveness, sustainability, and impact. The Accountability, Policy, and Learning Framework supplements the Governance Charter with more detail on how the work of the GNI is carried out.Footnote 94

Since its inception, the GNI has been criticized for lack of participation (including by smaller and non-US companies), for not being independent enough in the assessment process,Footnote 95 for the lack of a remedy mechanism, for insufficient focus on privacy by design, and for a lack of accountability.Footnote 96 These criticisms speak to the inherent challenge of having an industry define its own standards and procedures for respecting users’ rights to privacy and freedom of expression. Moreover, it has been argued that the protection of users’ rights runs contrary to business interests.Footnote 97 In relation to the latter challenge, it is important to note some fundamental differences between the rights in question and the challenges they pose.

Both privacy and freedom of expression protect individual freedoms by setting limits on state (and private actor) intrusion. With privacy, these limits are formulated as principles that guide how and when personal information may be collected, processed, and exchanged with a third party. In relation to data protection, it seems paradoxical to expect that the boundaries for data collection and use will be most effectively guarded by companies whose revenue model is built around harnessing personal data. Whereas companies may push back against illegitimate government requests for user data, they are less likely to be sufficiently critical judges of their own business practices, not least when those practices are closely tied to revenue.

With freedom of expression, the issue is slightly different. Here, the potential conflict between human rights standards and business practices stems from several factors, more indirectly linked to the revenue model. These factors include: unclear liability regimes that might incentivize the company to remove alleged illegal content without sufficient due process safeguards and that position the company as the final authority regarding which content to remove; pressure from governments to block, filter, or remove content; and internally defined standards regarding content moderation and enforcement of the standards.

As reflected in its baseline documents, the GNI is strongly anchored in the initial narrative of providing guidance to Internet companies in countries where local laws conflict with international human rights standards, rather than the systematic human rights impact assessment suggested by the UNGP. The GNI Principles state:

The right to freedom of expression should not be restricted by governments, except in narrowly defined circumstances based on internationally recognized laws or standards… . Participating companies will respect and protect the freedom of expression rights of their users when confronted with government demands, laws and regulations to suppress freedom of expression, remove content or otherwise limit access to information and ideas in a manner inconsistent with internationally recognized laws and standards.Footnote 98

Similarly, the Implementation Guidelines for Freedom of Expression discuss company practices in relation to “Government Demands, Laws and Regulations”Footnote 99 rather than human rights impacts. These principles illustrate that for the GNI, threats to freedom of expression are framed as illegitimate government behavior, and its role is to assist companies with human rights–compliant conduct when confronted with, for example, an overly broad request for filtering or blocking of content.

While industry push-back against illegitimate government requests undoubtedly addresses a relevant human rights problem, it is not sufficient to comply with the responsibilities set out in the UNGP. Those responsibilities require companies to know their actual and potential human rights impacts, to prevent and mitigate abuses, and to address adverse impacts they are involved in. In other words, companies must carry out human rights due diligence across all operations and products. The process of identifying and addressing the human rights impact must include an assessment of all internal procedures and systems, as well as engagement with the users potentially affected by the company practices. It follows that for GNI members such as Yahoo, Facebook, and Google, it is not sufficient to focus on government requests and human rights–compliant practices in this realm. Rather, assessment is needed on the freedom of expression impacts that may flow from all company practices, including, for example, when the company enforces community standards or takes down content based on alleged copyright infringement.

Internet platforms such as Facebook and YouTube influence the boundaries of what users can say and view online via their terms of service. Enforcement of these terms of service must work effectively at a scale of millions of users, including in high-profile controversies such as the “Innocence of Muslims” video,Footnote 100 as well as in more routine cases where users report objectionable content. In practice, the terms are translated into specific definitions and guidelines that are operationalized by employees and contractors around the world, who “implement the speech jurisprudence”Footnote 101 by making decisions on which content to leave up or remove.Footnote 102 According to Google, for example, deciding on the limits of freedom of expression for a billion users is “a challenge we face many times every day.”Footnote 103 Yet, an intermediary’s terms of service and the means of enforcing those terms are not part of the GNI norms and standards.

A similar challenge is found in relation to privacy. The GNI Principles iterate that

the right to privacy should not be restricted by governments, except in narrowly defined circumstances based on internationally recognized laws and standards… . Participating companies will respect and protect the privacy rights of users when confronted with government demands, laws or regulations that compromise privacy in a manner inconsistent with internationally recognized laws and standards.Footnote 104

The corresponding section in the Implementation Guidelines addresses “Government Demands, Laws and Regulations” as well as “Data Collection.” The latter is concerned with risk analysis of the specific national jurisdiction in which the company operates.Footnote 105 In line with its counterpart on freedom of expression, the GNI Principles and the attached Implementation Guidelines focus merely on the negative human rights impact caused by external pressure from governments, whereas internal mechanisms related to data processing and exchange remain unchallenged.

This is unfortunate, given that the business model of online platforms, which is based on targeted advertising, is increasingly accused of facilitating privacy violations. On Facebook, for example, advertisements are targeted to individual users’ interests, age, gender, location, and profile. This enables advertisers to select specific groups and target advertisements either on the Facebook website or on other websites using Facebook’s advertising services. This business model has caused a number of privacy-related controversies. Most recently, in 2015, a Belgian research study criticized Facebook’s data-processing practices and concluded, in relation to Facebook’s social media plug-ins, that Facebook processes the personal data of its users, as well as the data of all Internet users who come into contact with Facebook, without the necessary consent for “tracking and tracing” or for the use of cookies.Footnote 106 As a follow-up to the study, the Belgian Privacy Commissioner issued a set of recommendations to Facebook.Footnote 107 This is just one example of how Internet platforms can impact the privacy of their users through their online business model rather than through government pressure. Yet, these aspects of company practice in relation to privacy are not included in the GNI norms and standards.

In sum, several of the major Internet companies frame their core mission in terms of freedom of expression and engage in industry networks such as the GNI that are dedicated to protecting human rights norms and standards in the online domain. Yet the effectiveness of the GNI in protecting human rights is limited by several factors. First, it is based on a voluntary commitment, with no binding obligations on companies. Second, it is largely concerned with limiting and safeguarding against undue government pressure on companies, whereas the companies’ own content regulation and user tracking and profiling are not covered, despite their potential human rights impact.

V Challenges to Human Rights Protection in a Privatized Online Domain

In this final section, I will discuss whether the frameworks that currently govern the activities of online platforms are sufficient to provide the standards and mechanisms needed to protect and respect human rights online, drawing on the challenges outlined in the previous section.

A first challenge relates to the fact that core civil and political rights (privacy, freedom to search for information, freedom to express opinions) are exercised within a commercial domain, with companies holding unprecedented power over the boundaries and conditions for exercising those rights. Arguably, some of the most widely used platforms and services may affect public and private life in a way traditionally reserved for public authorities, yet they are largely free from binding standards to protect freedom of expression and privacy. Whereas this governance gap may have a positive impact on rights and freedoms in a state-repressive context, it does not remove the challenges it raises within democratic societies. Companies that have a substantial impact on the environment are increasingly subjected to national regulations for business conduct, yet similar attention has not been paid to online platforms. Scholarship is only now beginning to address the broader societal implications of private ownership of the online infrastructure of search, expression, and debate, an ownership that produces the double logic of user empowerment and commodification of online activity.Footnote 108

Human rights law is state-centric in nature and holds no direct human rights obligations for private actors. The governance gap accompanying globalization was a core driver for the development of the UNGP, and therefore for asserting the corporate responsibility to respect human rights as a freestanding, universally applicable minimum standard of business conduct – one driven by global social expectation while at the same time based on international law.Footnote 109 Nonetheless, the soft law framework of the UNGP, however widely endorsed, remains voluntary by nature, as do industry initiatives such as the GNI.

Further, even these soft law frameworks have significant gaps. In 2016, the five GNI member companies were found compliant with GNI norms and standards by GNI-appointed assessors.Footnote 110 There are, however, several shortcomings to this assessment process. First, it does not entail a comprehensive human rights impact assessment of all business practices, as the UNGP prescribe; instead it focuses more narrowly on the issues that the GNI members have chosen to include in their norms and standards. Push-back strategies against illegitimate government requests are thus the focus of assessment, whereas the impact of business processes for taking down content that violates internally defined standards is not considered. Second, the terms and conditions of the assessment process, including the selection of assessors, are set within the GNI itself, giving the companies under review influence over the baseline against which they are assessed.

Another weakness of these soft law frameworks is limited access to remedy. As emphasized by the third pillar of the UNGP, states must take appropriate steps to ensure access to an effective remedy when business-related human rights abuses occur within their jurisdiction. Yet despite the impact that online platforms have on users' rights to expression and privacy, users have few channels through which to address potential or actual infringements of those rights.Footnote 111 In sum, given the scope and scale of these companies' potential impact on human rights, the voluntary approach seems insufficient to provide the billions of Internet users with the level of protection to which they are entitled under international human rights law.

This brings us to the second challenge, namely, whether states have a positive obligation to legislate the responsibilities of these companies. Does the character of major online platforms call upon states to provide human rights guidance for, and possibly regulation of, these actors? To date, neither the United States nor Europe has fully taken up this challenge. In April 2016, the European Union concluded a four-year comprehensive data protection reform that, among other things, increased attention to the practices of online platforms.Footnote 112 Yet while online platforms' negative impact on privacy has thus received some attention, their impact on freedom of expression has not; no national regulation protects against the potential negative impact that a major Internet platform may have on freedom of expression. As previously mentioned, in the United States the First Amendment and the public forum doctrine protect expression in the public domain, but on the Internet, the private companies that control communicative platforms are free to decide which types of speech they support, including by taking down, blocking, or filtering expression that would otherwise be protected by the First Amendment. In consequence, expression is less protected in the online domain, despite the wide opportunities online platforms provide for realizing freedom of expression in new ways. Likewise, the United States has no general data protection regulation covering these private actors, and thus no clear boundaries for companies' handling of personal data.

However urgent, several factors suggest that a solution will not be forthcoming any time soon. The transnational nature of online platforms makes it difficult for states to address their impact on freedom of expression or privacy domestically. Moreover, the United States and European states have so far been unable to agree on the scope of freedom of expression, for example concerning protected speech, and they lack a common standard for data protection. Whereas the European approach encompasses both negative and positive state obligations in the areas of freedom of expression and privacy (e.g., imposing regulations on private actors), the US approach has focused on the negative state obligation to avoid interference. While these issues have received some scholarly attention, they have not surfaced as prominent policy issues in either Washington or Brussels. It is therefore unrealistic to expect a common US/EU policy for the major online platforms in the foreseeable future.

If European states were willing to invoke their positive obligations in order to protect freedom of expression online, they would have to apply national standards for protected speech to the online domain. In consequence, Internet platforms would have to comply with different standards for protected speech depending on the location of their users. Although this would most likely provoke controversy and resistance from the companies, it is in principle no different from the current situation, in which platforms already adhere to different national regimes for unlawful content. Just as Facebook and Google have processes for handling allegedly unlawful content in specific national jurisdictions, they could have processes for ensuring that no content is taken down unless it satisfies the criteria set out in human rights law. Such a mechanism would ensure that the companies' commitment to freedom of expression is operationalized not only in relation to government pressure, but also in the day-to-day practices that govern their communities of users.

In conclusion, divergence in the US and European approaches to privacy and freedom of expression, as well as the complexity of defining legal responsibilities in the face of conflicting local laws, means that a concerted state effort in this field is unlikely. Yet authoritative human rights guidance for the major online platforms is urgently needed in order to clarify the scope of their responsibilities and, more importantly, to ensure that their impact on billions of users’ rights is mitigated and potential violations are remedied.

12 Technology, Self-Inflicted Vulnerability, and Human Rights

G. Alex Sinha
I Introduction

Since 2013, perhaps no human rights issue has received as much sustained attention as the right to privacy. That was the year the first Snowden revelations reached the public, detailing sophisticated, large-scale US government surveillance programs designed to capture or analyze incredibly large volumes of digital data. In the weeks and months that followed, media reports confirmed that the US government had, at various recent points, run programs designed to scan and harvest data contained in e-mails, track Internet browsing activity, collect data from cell phones, collect digital contact lists (including e-mail and instant messaging contacts), and collect photographs of Internet users all over the world.Footnote 1 Other governments have been revealed to engage in similar practices.Footnote 2

Targeting the use of digital technologies is an obviously fruitful approach for state surveillance programs. By the middle of 2016, one estimate placed worldwide Internet use at more than 3.5 billion people.Footnote 3 Cell phone use is even more widespread, with recent reports suggesting the world is approaching five billion mobile users.Footnote 4 Significant numbers of people also use other technologies conducive to tracking, such as E-ZPass or Global Positioning System (GPS) devices. Those numbers are likely to increase in the coming years, which is particularly significant because of the way in which digital technologies lend themselves to insecure use and mass surveillance.

The ongoing global conversation about the legality of surveillance practices has focused on a number of dimensions of the human right to privacy, but there has been little serious discussion of a major factor in the expansion of the insecure use of digital technologies: the user. A significant portion of the information collected by surveillance (or otherwise made vulnerable to unintended recipients) is exposed voluntarily, sometimes deliberately or knowingly, or with unjustified ignorance of the risks of transmitting it in a particular manner. This chapter argues that, as human rights bodies, governments, and advocacy groups seek to understand the protections provided by the human right to privacy, it is also essential to clarify the conditions (if any) under which a person may waive those protections. The purpose of this chapter is therefore to help launch a conversation about waiving the human right to privacy and the role of the state in fostering the ability of individuals to make better choices in protecting their privacy.Footnote 5

II Individual Choice and the Human Right to Privacy

The Snowden revelations have triggered increased engagement on the right to privacy among civil society organizations,Footnote 6 multiple votes within the United Nations General Assembly,Footnote 7 research by the United Nations High Commissioner for Human Rights,Footnote 8 and the establishment of a new special rapporteur on privacy by the Human Rights Council.Footnote 9 Pressure is also building on the Human Rights Committee to update its interpretation of the right to privacy under the International Covenant on Civil and Political Rights (ICCPR), primarily to account for changes in circumstances and technological developments that render its previous interpretation from the 1980s practically obsolete.Footnote 10

This flurry of activity has largely left aside the relevance of individual users and the choices they make in the storage and transmission of their private information. Consider the state of the debate about US human rights obligations related to privacy – obligations that have occupied center stage since media outlets began publishing the Snowden revelations. Although a number of international agreements address the right to privacy,Footnote 11 a primary source of human rights obligations for the United States is the ICCPR, to which the United States has been a party since 1992. Article 17 of the ICCPR stipulates that “[n]o one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence … [and e]veryone has the right to the protection of the law against such interference.”Footnote 12 Additional articles in the covenant inform the scope and rigidity of individual rights, such as by addressing the geographic range of state duties under the covenant or the conditions under which a right might be limited or outweighed by other considerations (such as protecting national security).

The ICCPR does not explicitly address the role of individual choice in connection with the right to privacy,Footnote 13 which means choice has not factored much into a debate that has largely followed the text of the covenant. For example, a significant dispute has arisen about the meaning of the covenant’s ban on “arbitrary” and “unlawful” interference with protected privacy interests.Footnote 14 Multiple UN rights experts have recently concluded that non-arbitrariness requires states, inter alia, to ensure the necessity of interferences with the right to privacy and the proportionality of their invasive practices.Footnote 15 Such requirements are intended to ensure the continued relevance of the human right to privacy in the digital era and would, in theory, provide a check on large-scale surveillance of the sort revealed by Snowden.Footnote 16 Thus far, however, the United States has rejected requirements like necessity and proportionality, arguing that those standards do not necessarily follow from a ban on arbitrary and unlawful interference.Footnote 17 Instead, the United States insists that its programs need only be (and consistently are) “reasonable” because they are authorized by law and not arbitrary.Footnote 18

Another dispute concerns the geographic scope of state duties under the covenant. Article 2(1) provides that “[e]ach State Party to the present Covenant undertakes to respect and to ensure to all individuals within its territory and subject to its jurisdiction the rights recognized in the present Covenant.”Footnote 19 The United States typically interprets the phrase “within its territory and subject to its jurisdiction” conjunctively, such that it only accepts duties under the covenant toward people of whom both modifiers are true.Footnote 20 Key human rights bodies, including the Human Rights Committee (the UN body tasked with interpreting the ICCPR), have rejected that reading and interpret the phrase disjunctively.Footnote 21 This disagreement has garnered increased attention as a result of US surveillance revelations, because the narrower position turns would-be beneficiaries of human rights protection into unprotected targets of surveillance.

Yet another dimension of the ongoing debate about the human right to privacy concerns the limitations clauses built into the ICCPR. The United States has emphasized that it takes a broad reading of those limitations – especially the national security limitation articulated in, among other places, Articles 19, 21, and 22.Footnote 22 The US government routinely cites national security considerations to justify practices of concern to human rights bodies, including surveillance.Footnote 23 The implications of that approach are far-reaching in light of the ongoing War on Terror, which lacks any obvious or imminent endpoint.

Overall, the contours of this debate are unsurprising; the language of the covenant is a natural focal point for states and human rights bodies alike, and their disagreements, in turn, frame the contributions of interested advocacy groups. But the issue of personal choice casts a shadow over all such textual analysis. It is surely uncontroversial that one can, at least sometimes, waive privacy protection for particular pieces of information by exposing them to collection.Footnote 24 It should also be uncontroversial that some digital information, even if it can be obtained by intelligence agencies or hackers, remains protected by the human right to privacy. Yet through the insecure use of digital technologies, people increasingly expose enormous swaths of protected information with unclear levels of intentionality and culpability – even as the covenant remains silent on waiver generally and the ongoing debates about the human right to privacy fail to provide much clarity. The conditions under which one waives legal privacy protections are therefore both extremely important and extremely unclear.

Vulnerability can be chosen, such as when people share private information in public fora (like public websites). It can be recklessly or negligently assumed, such as when people undertake to store or transmit personal information in insecure ways (whether knowingly or because they lack a reasonable appreciation for the risk). It can arise through no fault of a user when it results from justifiable ignorance on the user’s part. And it can be imposed by circumstance in spite of a user’s best efforts, such as by the mere fact that surveillance authorities and hackers around the world typically have more power to harvest information than even the most committed individuals have to protect it. Any reasonable understanding of the waiver of the right to privacy must account for different notches on this spectrum.Footnote 25

For simplicity, we might assume that posting private information – say, political leanings and hobbies – on a public Facebook page constitutes some sort of waiver of privacy protection for that information, meaning that such information would fall outside the scope of the protections contained in Article 17. But what about intermediate cases that expose us to broader-than-intended intrusions, such as posting the same information on a semipublic Facebook page that is set to restrict access only to approved "friends"? Or sending sensitive information via unencrypted e-mail to a single recipient? Or running a sensitive Google search from a personal computer alone in one's home? Or carrying a cell phone that could just as well connect to a stingray (a cell-site simulator) as to a cell phone tower? Or attempting to send an encrypted message but accidentally sending it unencrypted?

These examples underscore a complicating underlying factor: Even when we want to protect our privacy, we are often fundamentally incapable of employing digital technology securely, whether due to ignorance, lack of skill, or inherent limitations in the technology itself. Yet we constantly use such technology anyway. Consider the example of the United States, which in some ways is a particularly risky place for such a casual approach to technology. Not only does the United States aggressively gather as much data as it can through perhaps the most powerful surveillance apparatus in the world, but it also features some problematic legal precedents for privacy under domestic law.

A string of Supreme Court cases has established the legal principle that voluntary disclosure of information to third parties eliminates one's expectation of privacy in that information, thereby defeating constitutional privacy protections that would otherwise require law enforcement to obtain a warrant.Footnote 26 In several cases decided between 1952 and 1971, the Court consistently held that the Fourth Amendment prohibition on unreasonable search and seizure does not apply to the contents of a person's utterances that are voluntarily communicated to a government agent or informant.Footnote 27 A second series of cases, decided between 1973 and 1980, extended that rule to business records provided to a third party. For example, in United States v. Miller, the Supreme Court ruled that there is no reasonable expectation of privacy in checks and deposit slips provided to banks, as those are "negotiable instruments" rather than "confidential communications" and the data they contain are "voluntarily conveyed" to the banks.Footnote 28 In Smith v. Maryland, the Court reinforced its earlier holdings as applied to records of phone calls placed by a criminal suspect. The Court held that, because dialing numbers from one's phone involves providing those numbers to the phone company, the police may collect records of those calls, without a warrant, through a pen register installed on the telephone company's (rather than the suspect's) property.Footnote 29

In one sense, each of these rulings was quite narrow. The first cluster addressed a criminal defendant’s communications with a government agent or informant, Miller concerned checks and deposit slips provided to a bank, and Smith addressed the right of a criminal suspect to assert Fourth Amendment protection for the numbers he dials from his home phone. Yet in all of these cases, the Court held that constitutional privacy protections under the Fourth Amendment to the US Constitution simply did not apply, at least in part because the parties asserting their rights had voluntarily disclosed the information in question to a third party. The cases are thus suggestive of a rule that extends to many other contexts.

As many have noted,Footnote 30 the underlying rule – sometimes referred to as the “third-party doctrine”Footnote 31 – has sweeping implications in the current era. Most people now turn over a significant and growing proportion of their private information to third-party service providers. E-mail providers like Google rather notoriously scan the text of our messages for key words so they can tailor their advertising to matters of interest to specific users. The specific websites we visit are recorded by Internet service providers (ISPs).Footnote 32 And, just like in Mr. Smith’s case, significant information about our phone activity – now including text messages as well as calls – passes through the hands of our phone service providers.

In fairness, there is a question as to whether the third-party doctrine would extend to the content of communications (rather than metadata) passing through the hands of a service provider.Footnote 33 The phone numbers in Smith are considered metadata, and it is debatable whether the monetary values on checks and deposit slips from Miller count as content.Footnote 34 The first series of cases discussed above concerned content, but that content was conveyed (sometimes unknowingly) to a government agent or informant rather than through a third-party service provider. One might attempt to distinguish these cases, as Orin Kerr has done, carving out Fourth Amendment protection for content but not metadata.Footnote 35 Others, like Greg Nojeim, argue that metadata can be sensitive enough to warrant Fourth Amendment protection on its own, even if precedent does not necessarily support that view.Footnote 36 There is no clear consensus on the matter in US courts, although the holding in Smith could arguably reach content, because the Court did not explicitly distinguish between content and metadata: "[T]his Court consistently has held that a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties."Footnote 37 And although Justice Sonia Sotomayor has questioned the third-party doctrine precisely for its implications at a time when so much of our daily activity involves third parties,Footnote 38 it remains unclear how US courts will continue to apply the doctrine.
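To make the content/metadata distinction concrete, the sketch below uses Python's standard email library to build an ordinary message (the addresses, date, and subject are invented for illustration). The header fields are the metadata a provider necessarily handles in order to deliver the message at all; even if the body were end-to-end encrypted, those fields would remain readable in transit.

```python
# Illustrative only: the content/metadata split in an ordinary e-mail,
# built with Python's standard library. All values are hypothetical.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "source@example.org"               # metadata: who is writing
msg["To"] = "reporter@example.org"               # metadata: to whom
msg["Date"] = "Mon, 06 Mar 2017 09:00:00 -0500"  # metadata: when
msg["Subject"] = "background materials"          # often handled as metadata
msg.set_content("The body is the 'content'; encryption can hide this part.")

# What a relaying provider necessarily sees, the body aside:
for header, value in msg.items():
    print(f"{header}: {value}")
```

Whether these two layers deserve different legal treatment is precisely the dispute between commentators like Kerr and Nojeim described above.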

In light of what appear to be tangible legal risks, not to mention the practical likelihood that various state intelligence agencies and others are likely to obtain nontrivial proportions of our digital data, it is therefore worth inquiring how deliberately and culpably people appear to store and transmit so much sensitive information insecurely.

III The Voluntariness of Insecure Use of Technology

It is impossible to deny that cell phones and the Internet are convenient for managing private matters, and many of us sometimes elect to use them when we know (or should know) that it is insecure to do so. But in light of the possible implications for waiving the human right to privacy, it is important to recognize that the use of digital technologies is often less voluntary than might appear at first glance. While there is obviously some element of choice in when and how people adopt technologies, there is a large measure of compulsion as well. Many employers assign e-mail addresses to employees, the use of which is more or less mandatory. Certain employers also issue smartphones to remain in better contact with their employees, or to enable employees to remain in better contact with clients. Universities often require students to use online systems to access course materials and other important information. Indeed, online research has become all but unavoidable for a number of jobs and scholarly endeavors. For many people, unplugging is not even a meaningful possibility.

There are also broader practical concerns that raise questions about the voluntariness of much Internet and phone use. The obvious utility of cell phones can make it prohibitively costly to eschew them. For example, cell phones are unparalleled as a parenting tool for remaining in contact with one's children. Even for those who can do without a cell phone, the choice to do so can impose substantial costs. Moreover, increasing cell phone use has correlated with a dramatic decline in the installation and maintenance of pay phones, making it ever more impractical to hold out for privacy reasons.Footnote 39

Similarly, refusing to use the Internet may ultimately foreclose access to a range of substantive goods.Footnote 40 Remaining in touch with others is just one example. For those of limited means, long-distance communication without the Internet is especially difficult, as phone calls can be expensive and postal mail is slow. But instant messaging and voice-over-IP services (like Skype) permit free communication with other users all over the world. Similarly, staying up to date on current events, managing one’s finances, and planning travel – not to mention applying for food assistance or other government benefits – are all increasingly difficult without the Internet, as companies and governments have scaled back support for non-Internet-based interaction. Resisting new technologies can be so costly that it hardly resembles a genuine choice.Footnote 41

Moreover, there is good reason to conclude that many users of digital technologies simply fail to appreciate their vulnerability, rather than knowingly assuming that vulnerability.Footnote 42 People routinely surrender sensitive or damaging personal information to third parties that cannot safeguard it properly, as evidenced (for example) by recent hacks of Sony, Target, Yahoo, and the Ashley Madison website.Footnote 43 The Ashley Madison hack, for instance, exposed the identities of many people who had used the site to set up extramarital affairs. Some of those people were seriously harmed by the revelation, including employees of the US federal government who used their official work e-mail addresses to set up accounts.Footnote 44 It is implausible that most of these people were indifferent to publication of their private information, and much more likely that they did not fully appreciate their vulnerability.

Further, many of those who recognize the privacy threats posed to electronic communication have limited resources and expertise to address the problem. For those who are not adroit with technology, it can be intimidating to pick among the options and get comfortable using the proper tools as a matter of course. Unsurprisingly, there are also conflicting opinions about the efficacy of various measures, and those who lack a technical background are not well placed to make a confident choice.

Security can also be both expensive and slow; at minimum, as one particularly tech-savvy individual put it, proper security measures can impose a significant tax on one’s time.Footnote 45 Moreover, even those measures that are inexpensive and simple to use may be ineffective without buy-in from one’s correspondents. For example, free, easy-to-use software is available for encrypting chats, text messages, and phone calls,Footnote 46 but encryption is pointless unless all parties to a communication are willing to use it. And the typical user also has no power whatsoever to improve the security of some of the systems many of us are obligated to use, such as e-mail provided by an employer.
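The buy-in problem can be seen in a few lines of code. The sketch below is a minimal illustration, assuming the PyNaCl library (a Python binding to the widely used NaCl cryptographic primitives); the correspondents and message are invented, and this is not the protocol of any particular messaging product.

```python
# Minimal sketch of why end-to-end encryption requires buy-in from both
# parties, using the PyNaCl library (pip install pynacl). The names and
# message are invented for illustration.
from nacl.public import PrivateKey, Box

# Each correspondent must generate and hold a key pair *before* any
# secure exchange is possible -- this is the buy-in step.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key; the ciphertext is useless to any
# third party (such as a relaying provider) who lacks the private keys.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place, 9 a.m.")

# Decryption happens only at the endpoints, here with Bob's private key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at the usual place, 9 a.m."
```

If the recipient never generates a key pair, the sender has no public key to encrypt to, and the exchange falls back to plaintext, which is exactly the unilateral situation described above.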

Lacking the time, knowledge, power, and sometimes money necessary to invest heavily in data security, the average person finds himself trapped between two imperfect options: risk the insecurity that comes with the use of ubiquitous and convenient technologies, or forego some of the most efficient tools available for conducting personal or professional business. And, often enough, the world – whether through our employers, our schools, or the demands of our personal lives – picks the first option for us.

Even parties with significant resources strain to maintain security – not necessarily because they are careless or sloppy, but rather because digital data can be vulnerable to collection by any number of actors, and because it can be exceedingly difficult to understand the nature of those vulnerabilities and to institute adequate protections. The US intelligence community, for instance, has failed repeatedly at protecting highly classified information from both whistleblowers and hackers.Footnote 47 It is not surprising, therefore, that sophisticated individuals struggle as well. I previously did research on surveillance for Human Rights Watch (HRW) and the American Civil Liberties Union (ACLU), for which I interviewed (among other people) nearly fifty journalists covering intelligence, national security, and law enforcement.Footnote 48 Those journalists have an overriding interest in protecting both the identities of their sources and the contents of their conversations. First and foremost, that interest arises from a general feeling of obligation to shield the identities of those who provide them with information. As a practical matter, the journalists also recognize that failure to protect sources effectively could compromise their ability to develop other sources in the future.

Many of the people I spoke with worked for major outlets, like The New York Times, The Wall Street Journal, The Washington Post, NPR, and ABC News.Footnote 49 Most also had years, even decades, of experience reporting on sensitive subjects, and a number had already won Pulitzer Prizes. As journalists go, therefore, the group I interviewed was elite in both their level of skill and their access to institutional support for the proper tools of the trade. All of that notwithstanding, these journalists consistently and vividly relayed to me significant ongoing challenges in using digital technologies securely.Footnote 50 Nearly all of them told me that the most secure method of doing their work involved avoiding technology as much as possible – meeting sources face-to-face while leaving cell phones at the office, or saving notes in hard copy rather than electronically.

Yet “going dark” by avoiding electronic devices significantly impeded their work, could still draw scrutiny, and was sometimes impossible.Footnote 51 To the extent that use of digital technologies is unavoidable, many of the journalists reported upgrading their security. For example, a number described learning how to use Tor, how to encrypt e-mails, chats, and texts, and how to purchase and set up air-gapped computers.Footnote 52 Some benefitted from data security training run by their outlets. Others with less institutional support described improvising new security measures, sometimes attempting to hide digital trails using methods that were in fact ineffective. One described sharing an e-mail account with a source and exchanging messages via saved, but unsent, drafts. That was the same technique David Petraeus reportedly used, unsuccessfully, in attempting to communicate secretly with his mistress.Footnote 53 Whether the journalists benefitted from professional security training or not, they were uniformly skeptical that it was even possible to be entirely secure in one’s use of digital technologies. Interviewees commonly expressed the feeling that, when facing off against an adversary as powerful as the National Security Agency, the best one could hope to do is “raise the cost of surveillance.”Footnote 54

I found similar results from speaking to attorneys,Footnote 55 whose interests in digital security stem from, among other things, their professional responsibility to safeguard confidential client information.Footnote 56 I interviewed more than forty attorneys, most of them working on matters of potential interest to the US government, such as criminal defense in the national security context. Just as the journalists expressed significant concerns about managing their relationships with sources, the attorneys largely worried about managing their relationships with clients. Some found that merely warning their clients against using phones and e-mail made the clients increasingly mistrustful and hampered their ability to develop a relationship. And, like the journalists, a number of the lawyers expressed the belief that speaking face-to-face is now essential, notwithstanding the costs and time constraints associated with doing so.

It is noteworthy that these journalists and attorneys – educated, fairly well-resourced people with an unusually high interest in protecting the privacy of some of their interactions – have to wrestle so mightily with the issue of data security. Indeed, merely planning this research required a surprising amount of thought within HRW and the ACLU. I needed a way to reach out to my subjects electronically concerning sensitive matters, and a way to convey to them my competence in protecting any data they might provide. That was a difficult goal to achieve, especially because so much of security turns on the measures taken by one's correspondents. Like many of my research subjects, I therefore had to work with the institutions backing me to develop security protocols that were affordable and practical under the circumstances. Notwithstanding the assistance I had, the process involved some measure of trial and error, and at least one embarrassing security lapse on my part. In short, true digital security is incredibly difficult or perhaps even impossible to attain – a point that cannot be lost in attempting to understand the relationship between the use of digital technologies and the waiver of the human right to privacy.

IV Drawing Some Conclusions

A modern understanding of the human right to privacy must contend with these questions, and the purpose of this chapter is to highlight the need for an extended conversation about the issue of waiver, especially among human rights authorities.Footnote 57 This section offers some preliminary conclusions about the conditions under which a technology user’s actions might constitute a waiver of his or her privacy protections under Article 17 of the ICCPR, and the duties of the governments that are parties to the ICCPR to support the choices of individuals who take measures to secure their digital data. Per the Vienna Convention on the Law of Treaties, “[a] treaty shall be interpreted in good faith in accordance with the ordinary meaning to be given to the terms of the treaty in their context and in the light of its object and purpose.”Footnote 58 When that approach alone “[l]eaves the [treaty’s] meaning ambiguous or obscure,” or “[l]eads to a result which is manifestly absurd or unreasonable,” then “[r]ecourse may be had to supplementary means of interpretation, including the preparatory work of the treaty and the circumstances of its conclusion.”Footnote 59 Application of these principles often permits and may at times require that a state’s treaty obligations evolve with changing circumstances. As Eirik Bjorge has recently put it, “The wording [of a treaty] is important because it may lead us to ascertaining the intention of the parties, not because it is somehow an end in and of itself.”Footnote 60 An evolutionary reading of a treaty may therefore “be required by good faith.”Footnote 61 For the reasons laid out below, the right to privacy reveals that the ICCPR is precisely the sort of treaty that requires an evolutionary reading.

A Waiver Should Be Understood Narrowly under the ICCPR

The text of Article 17 is the starting point in determining the conditions for waiver. As noted above, the covenant prohibits “arbitrary or unlawful interference” with four distinct but potentially overlapping items: privacy, family, home, and correspondence. Challenging questions arise immediately simply from the relationships among these categories. For example, privacy is distinctively diffuse relative to the other items; the concept of privacy neither encompasses everything about one’s correspondence, family, and home, nor is it fully exhausted by those three domains. The language of Article 17 therefore immediately invites the possibility that certain information will warrant protection under more than one category. That possibility is even more complex than it appears at first glance because the protections under the different categories may not be symmetrical.

For example, whether state interference with a particular piece of information intrudes upon my privacy would seem to depend, at least in part, on my attitude toward that information or my previous choices about whether to publicize it. In other words, if I publicize certain information widely enough, then interference with that information is not, by definition, an interference with my privacy. By contrast, e-mail, text messages, and instant messages are almost certainly "correspondence" under the covenant, irrespective of whether I intend them to be private, and thus interference with these items could trigger Article 17 even if I had shared those e-mails, texts, or messages with many correspondents. Because the covenant lists "correspondence" as its own protected category, separate from "privacy," interference with correspondence could require an assessment of non-arbitrariness and lawfulness by default.Footnote 62

This asymmetry may lead to irregular outcomes when assessing whether an individual has waived the protections of Article 17. Suppose a state takes the position that the interception of an unencrypted e-mail sent by me does not count as an interference with my privacy because the lack of encryption renders the e-mail non-private. Even so, the e-mail could still be protected under Article 17 as correspondence. By contrast, an analogous transaction that does not fall under the secondary protection of the “correspondence,” “home,” or “family” categories – say, storing certain work information unencrypted in the cloud – might not qualify for Article 17 protection at all. Those results may seem counterintuitive, raising further questions about whether “privacy” as a category should be understood as a catchall designed only to extend the protections of Article 17 to sensitive domains other than one’s correspondence, family, and home, rather than offering a separate layer of support that overlaps with those enumerated categories. In any event, how to untangle the relationships among these categories is exactly the sort of question this chapter argues is in need of an authoritative answer.

Notwithstanding such questions, however, and based on the covenant’s object and purpose, it appears that covenant protections should generally be rounded up rather than down. The main objective of the covenant is described in broad terms in the preamble, which offers a sweeping account of the value of the rights it protects. Under the terms of the covenant, the enumerated rights “derive from the inherent dignity of the human person”; people must be able to enjoy those rights to achieve “the ideal of free human beings … [living] in freedom from fear and want.”Footnote 63 Although it can be tempting to link the spread of digital technologies and the rise of social media with a devaluation of privacy and conclude that attitudes toward privacy have shifted with technological advancement, that is neither obviously true nor especially relevant.Footnote 64 The object and purpose of the covenant would be undermined by permitting the casual or unintentional waiver of a core right simply because many people use digital technologies insecurely. Indeed, when only a minuscule, elite subset of the population is actually capable of safely maneuvering around technological vulnerabilities – and, even then, imperfectly – the appropriate conclusion is not that everyone else has chosen insecurity, but rather that security is too difficult to attain. Conditioning enjoyment of the right to privacy under the ICCPR on the secure use of digital technologies would render the right meaningless.

Among the implications of this conclusion is that strict application of the US third-party doctrine is likely incompatible with the ICCPR. Digital technologies nearly always involve the provision of information (whether content or metadata, for those who accept the distinction) to a third party. Were any disclosure to a third party enough to eliminate a user’s privacy interest, users would lose privacy protections for nearly all of the information they store or transmit in digital form. Even if the doctrine only applied to metadata, it would be unacceptably broad under the covenant, especially because metadata can be as revealing as content, but is more poorly understood by the public at large (and therefore may be less voluntarily shared).

In the recent Supreme Court case where Justice Sotomayor questioned the wisdom of the third-party doctrine, Justice Samuel Alito contemplated the possible effects of new technology on privacy rights. He wrote:

Dramatic technological change may lead to periods in which popular expectations are in flux and may ultimately produce significant changes in popular attitudes. New technology may provide increased convenience or security at the expense of privacy, and many people may find the tradeoff worthwhile. And even if the public does not welcome the diminution of privacy that new technology entails, they may eventually reconcile themselves to this development as inevitable.Footnote 65

Depressing as these comments may be for privacy advocates, they are at least comprehensible within a system that accepts the third-party doctrine. But that doctrine is questionable in the digital age, and it might never have taken root had it first been considered under present circumstances. Many people do maintain some sort of subjective expectation of privacy in information they share with a third party. One might properly comprehend that an e-mail provider has access to one's e-mail content without also believing that one's e-mails could just as well have been sent to other third parties, such as foreign governments searching for intelligence. The same is true for digital banking activity, text messages, and any other manner of nonpublic transaction that is possible only with the assistance of a third party. In the abstract, and in the digital age, that expectation is not objectively unreasonable either – at least not obviously so.

Moreover, the third-party doctrine plays out differently under the covenant as compared to the US Constitution. Even beyond the wrinkle identified above with respect to the four types of interests protected by Article 17, there are a number of relevant differences between the rights to privacy guaranteed, respectively, by the covenant and the Fourth Amendment. For one, Article 17 makes no explicit reference to reasonableness or subjective expectations, unlike the Fourth Amendment to the Constitution, which bans, inter alia, “unreasonable searches and seizures.” Article 17 also applies more broadly than the Fourth Amendment; whereas the Fourth Amendment specifically regulates US government action, Article 17 bans a variety of privacy interferences by government actors and also obliges governments to protect rights holders against interferences from private actors. Incorporating recognition of these points into an applicable waiver standard is essential to ensuring that the protections of Article 17 keep pace with technological change, thereby staying true to the object and purpose of the covenant.

B States Should Support – and Certainly Should Not Interfere with – Active Steps Taken by Individuals to Protect Their Data

Under the ICCPR, states are bound both “to respect and [to] ensure” the rights in the covenant.Footnote 66 It has become common to describe those obligations by reference to the “respect, protect, and fulfill” framework, which in the context of privacy essentially captures the idea that states must avoid actively infringing on privacy rights, protect those rights from infringement by others, and take positive steps to support individual realization of privacy rights.Footnote 67 Recent discussions of privacy tend to focus on the negative obligations of states – the obligation not to violate privacy by engaging in improper surveillance, for example. But the positive obligations of states must remain part of the conversation as well.

By analogy, consider the human right to health, which is protected by the International Covenant on Economic, Social and Cultural Rights (ICESCR).Footnote 68 The ICESCR guarantees “the right of everyone to the enjoyment of the highest attainable standard of physical and mental health.”Footnote 69 Properly understood, the right is not an entitlement to be healthy, but rather a right to have the state take adequate steps to facilitate the health of its population.Footnote 70 A state’s full compliance with its obligations to ensure the right to health would not prevent members of its population from making poor health choices. Nevertheless, parties to the ICESCR are obligated to undertake measures to promote the health of their people.Footnote 71 Those measures might include reasonable efforts by the state to guarantee access to key resources for promoting health, such as by providing meaningful access to adequate medical care and nutrition.Footnote 72 They could also include the provision of information that facilitates informed health choices, such as nutrition labels on food packaging or disclosures about the side effects of various medical treatments.Footnote 73

Similarly, a state can properly ensure the right to privacy even as some of its citizens compromise their rights through poor choices about how to handle their own data.Footnote 74 The human right to privacy entitles one to protect certain information from invasion not just by one's own government, but by foreign governments, corporations, hackers, identity thieves, and almost anyone else. As noted above, governments that have ratified the ICCPR are obligated to protect individuals from such other actors who might intrude on the right, and to facilitate the efforts of individuals who seek out tools to secure their own information.

This obligation to ensure the right to privacy has several implications. It means governments should work with tech companies to repair known software flaws rather than secretly hoarding those exploits and allowing their populations to be rendered vulnerable. It means governments should seek to prevent or discourage the hacking of personal information rather than tolerating or encouraging those hacks. It means governments should offer resources to educate the public on good technological hygiene – for example, managing passwords for their online accounts or securing their mobile devices – rather than making it easier for companies to collect and sell their digital information, such as their web browsing histories. And it means governments should support the development and use of technologies that make it genuinely possible for individuals to secure their information, such as end-to-end encryption for messaging services. At the very least, it certainly means states may not actively prevent people from accessing and using reasonable security and encryption tools.
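To give one concrete, hypothetical example of such a public education resource: the sketch below generates a random passphrase in the style of the well-known diceware method, using Python's standard secrets module. The word list is a placeholder invented for illustration; a real tool would draw on a vetted list of several thousand words.

```python
# Hypothetical sketch of a diceware-style passphrase generator, the sort
# of simple hygiene tool public education efforts might point users to.
import secrets

# Placeholder list; real diceware lists contain 7,776 words.
WORDS = ["orbit", "lantern", "puzzle", "meadow", "copper", "violin",
         "harbor", "thistle"]

def passphrase(n_words: int = 6) -> str:
    """Pick words uniformly at random with a cryptographically secure RNG."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

if __name__ == "__main__":
    print(passphrase())  # e.g., "copper meadow violin orbit harbor puzzle"
```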

States seeking to clear the way for aggressive surveillance might prefer simply to push their obligations aside. Consider once more the example of the United States, which has publicly opposed the proposals of technology companies to provide encryption as standard for various services.Footnote 75 In particular, law enforcement officials like former FBI Director James Comey have strenuously resisted “end-to-end” encryption as a default for various forms of communication, pressuring companies that offer such encryption to alter their “business model.”Footnote 76 Similarly, former Director of National Intelligence James Clapper complained that the Snowden revelations accelerated personal use of encryption, which he claimed was “not a good thing.”Footnote 77 Even President Barack Obama publicly stated opposition to unbreakable encryption.Footnote 78 Moreover, as of 2011, the NSA had a policy of treating encrypted communications collected under one of its surveillance authorities as worthy of special scrutiny,Footnote 79 a point of concern for various privacy advocacy groups.Footnote 80 Although there were reports of serious debates within the Obama administration on encryption policy,Footnote 81 the public criticisms of encryption from prominent officials and the evidence that encrypted communications are viewed with suspicion by the government are discouraging. They are also legally relevant to the state’s duty to ensure the right to privacy, for they aim to limit the use, availability, or utility of tools that help individuals secure that right for themselves.

V Conclusion

This chapter advocates for a serious conversation about waiving the human right to privacy. Many other elements of the right are appropriately on the table already, being dissected, debated, reinterpreted, and applied to novel circumstances. All essential questions about the right to privacy should be folded into that discussion. Privacy issues will only grow more significant as digital technologies and the surveillance programs that track them become more sophisticated and ubiquitous. It is important that we get these issues right, and now is the time to ensure that we do.

It bears emphasizing that a universally accepted standard for waiving the human right to privacy may prove to be elusive, especially given that states like the United States may contribute to the debate by drawing on excessively broad standards from their own domestic legal precedents. Moreover, an acceptable standard may prove to be challenging to implement in any event, and its creation would in no way diminish the importance of questions already being addressed, such as the definitions of the terms in Article 17 or the proper limitations on the right. Nevertheless, waiver has broad implications for the legality of common state practices; the persistence of questions about its application only sows doubt where clarity is essential.

13 The Future of Human Rights Technology: A Practitioner's View

Enrique Piracés
I Introduction

Technology has been extraordinarily effective in reducing distances between people and places, but it has created an increasing distance between the present and the future. The rates of new product introduction and adoption are speeding up. It took forty-six years for electricity to reach 25 percent of the US population; the telephone reached that milestone in thirty-five years, and the Internet in only seven. For most of us, it is increasingly difficult to understand or anticipate long-term technological trends. Especially in the context of human rights practice, this inability commonly stokes fears of a dystopian future in which ordinary people, especially those already marginalized or disenfranchised, are subjugated by technology rather than benefiting from it. This chapter is both an attempt to help practitioners cope with new technologies and a proposal to make solidarity the driving force of technology transfer.

It has become cliché to say that technology and its impact on society advance at a rapid pace. It is also commonplace to say that societies and legal frameworks have a hard time adapting to technology’s pace and the behavioral changes it demands. But adaptation is a valuable goal, because there is no livable future without it. The human rights movement has taken note and, both systematically and spontaneously, looked for ways to adapt to the transformative era of the information society. Today, human rights campaigns rely heavily on social media and e-mail. The presentation of research results in courts, political offices, and public spaces commonly incorporates data visualization. Fact-finding practices often include the use of remote sensing and open source intelligence. Further, human rights research increasingly relies on computational analysis. Encrypted communications, and the tools and services that provide them, are now considered fundamental to the safety of human rights practitioners and their partners in the community. These are signs that, as the contributors to this volume remind us, the future of human rights will be intertwined with the advancement of technology.

The pace of technological change is unlikely to slow, and its relevance for human rights practice is unlikely to diminish. There is a valuable body of work, created over the past few decades, that focuses attention on the impact of technology on human rights. The lessons that we can extract from that literature will enrich our design for the future as well as our ability to evaluate the present.Footnote 1 Yet, as Molly Land and Jay Aronson point out in Chapter 1, the field of human rights technology is significantly undertheorized. I would add that the relationship between practice and theory has garnered even less attention. The contributors to this volume have gone a long way to redressing the first issue, especially with respect to human rights law. If we are to solve the second challenge, however, practitioners must help frame the debate in this interdisciplinary field. Doing so is essential to the advancement of effective human rights practice.

II Where Does the Future Begin?

Over the past ten years, the notion of human rights technology as an area of practice has garnered attention across disciplines. The growing use of the term “human rights technology” signals the interest of technical, scientific, and practitioner communities in advancing it as a field of practice. An important example, and one of the likely origins of this multidisciplinary interest, occurred in 2009, when the Human Rights Center at the University of California, Berkeley called for “leading thinkers, civil society members, activists, programmers, and entrepreneurs to imagine, discover, share, solve, connect, and act together.” This invitation materialized as an international conference, “The Soul of the New Machine: Human Rights, Technology & New Media,”Footnote 2 held in May 2009, and a follow-up conference, “Advancing the New Machine: A Conference on Human Rights and Technology,”Footnote 3 held in 2011, both in Berkeley. A diverse mix of academics, practitioners, and technologists attended those events, which launched a constructive debate about the uses of technology for human rights practice.

Since then, a growing number of efforts to create dialogue, promote debate, and engage technologists with rights defenders have emerged across the globe. Strategic donors to the human rights movement, like the MacArthur Foundation, the Ford Foundation, the Oak Foundation, Humanity United, and the Open Society Foundations, amplified these efforts. These foundations adapted their portfolios to help create the human rights technology field. Governments have also played a role, as can be seen in the programming of the Bureau of Democracy, Human Rights, and Labor at the US State Department,Footnote 4 the Open Technology FundFootnote 5 of Radio Free Asia (an initiative of the US Broadcasting Board of Governors), and the Swedish International Development Agency.Footnote 6

By now, there are dozens of international, regional, and national conferences and workshops each year that include debates on the use of technology for human rights.Footnote 7 Many organizations, like Benetech, HURIDOCS, and eQualit.ie, have carved a niche providing specialized technology and support to human rights practitioners. The growing interest can also be seen in the appearance of specialized and globally distributed communities of practice around issues of technology and human rights, such as the Internet Freedom Festival,Footnote 8 held yearly in Valencia, Spain, since 2015. This interest in technology has also reached traditional international actors like Amnesty International and Human Rights Watch, which have pioneered specialized programs within their organizations to address their remote sensing, data analysis, and digital security needs.Footnote 9 These examples are evidence of the growing and vibrant ecosystem interested in applying technology to solve human rights problems.

In order to frame how we think about the future of this field, it is essential to be aware of our own geopolitical and cultural positions. Human rights technology has not escaped some of the persistent problems that have faced the broader human rights movement. The most obvious, perhaps, has been the tendency to consolidate power in the economic capitals of the twenty-first century, geographically removed from most human rights crises. This can be acutely felt in the realm of technology, where investment in infrastructure can be too costly for grassroots organizations in the Global South. Current models of technology transfer reflect a unidirectional relationship, where technology is largely decided, designed, and created far away from the majority of people who need it. As Dalindyebo Shabalala reminds us in Chapter 3, funding and enforcement mechanisms for providing access to technology remain a challenge for effective technology transfer in international cooperation for adaptation to climate change.

For human rights practice – understood as fact-finding, advocacy, and litigation toward accountability, transparency, and justice – the fundamental problems with technology transfer are not limited to funding, but also include decision-making and design. Most technology is designed in places like the United States and the United Kingdom for practitioners and activists in the Global South, but generally without their involvement or input. A concerning example of this can be seen in Google's Jigsaw project. Previously known as Google Ideas, it was re-launched in 2016 with the goal of "investing in and building technology to expand access to information for the world's most vulnerable populations."Footnote 10 Although this project may have been created partly out of bona fide good intentions, it is in reality an example of the kind of power-consolidating technology transfer that could harm the development of a sustainable and fair human rights technology ecosystem. As the technology law and policy scholar Julia Powles argues, human development and human rights are too complex and too culturally diverse to be addressed by profit-driven companies acting on their own initiative.Footnote 11 More to the point, as Rikke Frank Jørgensen points out in Chapter 11, the debate on binding human rights obligations for companies has been ongoing for more than two decades, and the private sector has remained largely resistant to human rights frameworks.

The effect of this type of model – in which technology is designed for, but not with, practitioners – is twofold. First, it makes it more likely that a given technological “solution” will solve the wrong problem, because there is little consideration of the context in which a particular technology will be deployed, what it may be displacing, and what social or cultural practices it may be enhancing or altering. Understanding the cultural impact of technology transfer is paramount, as technology is by nature disruptive. It would be naive, and potentially detrimental to the advancement of human rights, to think that its effects can be controlled and isolated to a particular issue. Designing technology without the stakeholders at the table could also mean a lost opportunity to learn from other approaches to problem solving, thus limiting the types of solutions that can be imagined.

Second, this model can lead to investments that are unsustainable on the ground. The yearly cost of a single software developer in the Global North may be equivalent, for example, to the annual budget of a small organization that provides direct support to hundreds of migrants at the border between Mexico and Guatemala. Should we create expensive technology in their name from our comfortable seats in London, New York, or Palo Alto? Or should we bring them to the table to design a sustainable solution that recognizes their agency and goals? Should we even rely on for-profit companies to tackle complex geopolitical and cultural issues of global significance? Or should we create an open and distributed ecosystem that acts in the public interest?

When we think of the future, we must keep the sustainability of the human rights movement front and center. We need to guard against technology transfer creating dependence, exporting inequalities, or promoting a paternalistic relationship between technology providers and human rights practitioners. The current approach to technology, however, is largely based on the model of international cooperation for development, which Shabalala shows in Chapter 3 to be deficient on many levels. While his analysis focuses on new frameworks for organizing technology transfer at the government level, I wish to focus on efforts within the human rights community itself. In human rights practice, we can create better conditions for technology to effectively advance accountability, transparency, and justice if we move away from a technocratic approach and embrace the idea of transnational solidarity. International aid, like charity, is based on an asymmetrical relationship between a party in need and another party with resources or knowledge to share.Footnote 12 Relationships of that nature are prone to creating clientelism, dependency, and unidirectional knowledge transfer. A core motivation of this chapter is to suggest a solidarity-based framework as an alternative approach to technology transfer. A first step in that direction is for practitioners to educate themselves about the technology that will be the subject of that transfer.

III What Is Human Rights Technology?

Human rights practitioners frequently work in under-resourced, high-pressure environments. They tend to use opportunistic and adaptive approaches to problem solving. Because of the financial constraints that most human rights practitioners face, few technologies have been developed specifically for human rights practice. Instead, practitioners have adapted the majority of tools they use in the field from existing technologies. There are a small number of exceptions, composed largely of software projects around information management or communications. This includes projects like MartusFootnote 13 and OpenEvsys,Footnote 14 which were created specifically for human rights documentation, and privacy-enhancing mobile apps like those created by the Guardian Project. It also includes projects like PGP encryption and the Tor anonymity network, which were created by forward-thinking individuals who understood very early on in the information era that privacy and anonymity were instrumental to human rights.

Beyond these examples, the vast majority of technologies used in human rights practice are based on creative or opportunistic adaptations of general-purpose technologies. Today, practitioners rely on WhatsApp and Telegram to communicate with their peers or the subjects of their work; WordPress or Drupal to promote their ideas; Dropbox or Google Drive to manage their files; Google Apps (now G Suite) to collaborate on documents; and Skype to engage in meetings and interviews.

A significant difference between the few examples of purpose-built human rights technology and the general-purpose technology adopted and adapted by practitioners is the nature of the software behind them. Those solutions that have been created for human rights-specific purposes are largely open source. This means that the developers made the code they used to build the technology publicly available for anyone to review and tinker with. Many open source licenses – the “copyleft” family, such as the GNU General Public License – further require that those who make changes or additions to the software, in turn, allow others to freely use and modify their contributions.

The foundations and donors that support the human rights movement acted as positive agents of change in promoting the use of open source software. Nearly a decade ago, they began to request that the technology created with their support be designed as open and available to others. This is key for sustainability and replication, and quite likely allows donors to maximize the impact of their portfolios. This openness, especially if expanded beyond software, will be pivotal for the inclusion of Global South and grassroots organizations in the design, adoption, and evaluation of solutions that are tailored for them. Open source software is not necessarily cheaper to develop, but it is often available with few licensing and use restrictions. It also reduces dependency and promotes collaboration among distributed and culturally diverse communities.

An important consideration when thinking about technology is the fact that the same type of adaptation that human rights practitioners can make to advance accountability, transparency, and justice could be made by other actors – from governments and corporations to organized criminal groups and armed non-state actors. In that sense, most technologies could have dual or multiple uses, including for abuse and repression of human rights. For that reason, and as Lea Shaver concludes in Chapter 2, it is critical that human rights practitioners find avenues to exercise scrutiny and oversight over technological developments in order to minimize harm.

Finally, we must consider what type of technology we should be prepared to confront in the future. What most practitioners assume fits under “human rights technology” lies within the realm of information and communication technologies, or ICTs. But the uses of technology in the human rights context already go beyond this domain. Contemporary examples of this include the use of remote sensing by international organizations to find incidents of violenceFootnote 15 or cultural heritage destruction,Footnote 16 the growing interest in unmanned aerial vehicles (UAVs, or drones) to access unreachable areas,Footnote 17 and the use of DNA technology by forensic anthropologists to uncover evidence of mass atrocities.Footnote 18

IV What Technological Trends Could Shape the Future of Human Rights Practice?

Popular culture plays an important role in shaping the way that human rights practitioners think about technology. We tend to be very generic when discussing the effects of technology on society. For example, it is common to see contemporary issues framed as “the impact of social media” on relationships or “the effect of mobile technology” on the economy, rather than as questions of how companies, governments, communities, and individuals have integrated technology into our lives and societies. Thinking of technology as an entity divorced from human action is an inadequate starting point for discussing the future of human rights technology. If we were to follow that line of abstraction, we would risk ending up with a teleological framing of technology of the kind that authors like Kevin Kelly have proposed.Footnote 19 For Kelly, the global interconnected system of technology forms a super-organism, a “technium,” that is “partly indigenous to the physics of technology itself.” To think of the future of human rights technology, we need to avoid that path. Humans have created technology, and humans have used technology to alter society. We should avoid giving agency to technology and remind ourselves constantly that technology is created by people and organizations with agendas. These agendas will affect us, and we should aim to influence them.

To effectively shape these agendas, practitioners need a better and more specific understanding of the trends that will shape the future of human rights technology. In digital security, for example, we can expect expanded use of technologies such as end-to-end encryption, a system of communication in which encryption ensures that only the intended recipient can read a message; multifactor authentication, a method of computer access control in which a user is granted access only after successfully presenting several separate pieces of evidence to an authentication mechanism; and zero-knowledge encryption, a process that prevents a service provider from knowing anything about the user data that it is storing or transmitting.
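
To make the first of these concepts concrete, consider a minimal sketch of end-to-end encryption using the open source PyNaCl library (a Python binding to libsodium). This is an illustration of the underlying idea only, not the protocol of any particular messaging application, which would add key verification, forward secrecy, and other safeguards:

    # A minimal end-to-end encryption sketch (pip install pynacl).
    from nacl.public import PrivateKey, Box

    # Each party generates a key pair; private keys never leave their devices.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts with her private key and Bob's public key.
    ciphertext = Box(alice_private, bob_private.public_key).encrypt(
        b"Meet at the documentation workshop at 9."
    )

    # Only Bob's private key (paired with Alice's public key) can decrypt;
    # a service relaying `ciphertext` learns nothing about its content.
    plaintext = Box(bob_private, alice_private.public_key).decrypt(ciphertext)
    assert plaintext == b"Meet at the documentation workshop at 9."

The essential property is that the intermediary carrying the ciphertext never holds a key capable of reading it.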

In research and fact-finding, we can expect increased use of UAVs, or drones, resulting in greater availability of aerial images for documenting human rights and humanitarian situationsFootnote 20; expanded use of remote sensing and satellite imagery, which has become less expensive and more available as more firms enter the market and satellite technology improvesFootnote 21; and increased use of open source intelligence, knowledge produced from publicly available information that is collected, exploited, and disseminated in a timely manner to an appropriate audience for the purpose of addressing a specific investigative requirement.Footnote 22

In the case of advocacy, we are likely to see expanded use of complex visualization to support the narrative of human rights accountability efforts. An excellent example is the work of SITU Research, a practice working in design, visualization, and spatial analysis, on the destruction of sites of cultural heritage in Timbuktu, Mali. In collaboration with the International Criminal Court’s Office of the Prosecutor, SITU Research built a platform that combines geospatial information, historical satellite imagery, photographs, open source videos, and other forms of site documentation to facilitate the analysis and presentation of evidence. The Office of the Prosecutor used SITU’s tool successfully at the trial proceedings at the International Criminal Court in 2016.Footnote 23 This work is part of an emergent field called forensic architecture, first developed at Goldsmiths, University of London.Footnote 24 It refers to “the practice of treating common elements of our built environment as entry points through which to interrogate the present.”Footnote 25

The continued development of areas and projects like these will also be accompanied by new efforts in areas where technological trends are moving rapidly. While not an exhaustive list, concepts like artificial intelligence, blockchain, sensors, open source hardware, and the Internet of Things reflect areas that are likely to offer fertile ground for the development of human rights technology and applications.

A Artificial Intelligence

Perhaps nothing embodies our fascination with and fear of technology better than artificial intelligence (AI). Countless images in popular culture evidence this, and while the reality differs from the anthropomorphic versions we see on the big screen, it is no less fascinating.

AI is premised on the notion that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”Footnote 26 Scientists have been working to make this dream a reality for several decades, but a critical milestone, the equivalent of a “man-on-the-moon moment,” came when AlphaGo, a computer program developed by DeepMind, a UK company acquired by Google in 2014, defeated the reigning European champion at the ancient game of Go in late 2015, followed by one of the world’s strongest players, Lee Sedol, in early 2016.Footnote 27 The game of Go, which was invented in China thousands of years ago, has more possible board configurations than there are atoms in the observable universe. It is this complexity that made it a sizable test for artificial intelligence.

Generally speaking, AI is divided into weak AI and strong AI. Most artificial intelligence applications so far are considered either an expert system (ES) or a knowledge-based system (KBS), which means that they rely on an existing model or corpus of knowledge. This is, in a way, the application of existing knowledge to assess the best answer to a question or problem. This form of AI is generally referred to as “weak AI” because it requires a priori knowledge to arrive at the answer to a question. “Strong AI,” on the other hand, generally refers to the ability of a machine to perform “general intelligent action,” which is why it is also referred to as artificial general intelligence. The extraordinary achievement of AlphaGo is that it is not an expert system: instead of being fed an exhaustive catalogue of positions and choosing the one that best fit a particular scenario, it learned to play Go, training deep neural networks on human games and on millions of games played against itself, and combining them with search so that it evaluates only the most promising moves. While this does not yet amount to artificial general intelligence, it is widely accepted as evidence that AI has reached a tipping point much sooner than most scientists thought it would.

How can all this be of use for human rights practice? Can a machine teach itself to solve human rights problems? Will this be an opportunity or a challenge for human rights practice? In thinking of the future, I would argue that human rights practice is more likely to benefit first from advances in specific areas of AI research, like machine learning, computer vision, and natural language processing, than from automated decision-making. These advances will improve the ability of human rights researchers to discover, translate, and analyze relevant information.

To get a sense of what may be possible, we can look at some recent experimental uses of AI for human rights issues. Researchers at University College London, the University of Sheffield, and the University of Pennsylvania have used AI to develop a method for predicting the outcomes of judicial decisions of the European Court of Human Rights. The research team identified 584 cases relating to three articles of the European Convention on Human Rights: Article 3, concerning torture and inhuman and degrading treatment; Article 6, which protects the right to a fair trial; and Article 8, on the right to respect for a private and family life. After running their machine learning algorithm against this dataset to find patterns in the text, the team was able to predict the verdicts with 79 percent accuracy. This suggests that AI could be used to build predictive models that discover patterns in judicial decisions. Such an approach could help increase the success and effectiveness of litigation in defense of human rights by assisting advocates and lawyers in planning their litigation strategy.
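
The team’s exact pipeline is described in their paper, but a minimal sketch of this general style of text classification, written with the open source scikit-learn library and invented toy data standing in for the 584 real cases, might look as follows:

    # Sketch of predicting case outcomes from decision text (scikit-learn).
    # The texts and labels below are toy placeholders, not ECtHR data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    texts = [
        "applicant alleges ill-treatment in detention ...",
        "applicant complains of prolonged incommunicado interrogation ...",
        "domestic courts balanced family life considerations ...",
        "no evidence of degrading treatment was presented ...",
    ]
    labels = [1, 1, 0, 0]  # 1 = violation found, 0 = no violation

    # Word n-gram features plus a linear support vector machine,
    # a common baseline for legal-text classification.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
    print(cross_val_score(model, texts, labels, cv=2).mean())

With a real corpus, the cross-validated accuracy printed at the end is the figure comparable to the 79 percent reported by the researchers.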

Another example of the potential use of AI to advance human rights practice can be found in the work of the Center for Human Rights Science (CHRS) at Carnegie Mellon University.Footnote 28 After hearing of the challenges that human rights organizations were facing in analyzing and verifying the large volume of online videos of human rights abuses, researchers at the CHRS began to experiment with AI applications to solve these problems. With the goal of creating efficient and manageable workflows for human rights practitioners, they have created computer vision and machine learning methods to rapidly process and analyze large amounts of video. Their tools help human rights practitioners detect sounds such as explosions, gunshots, or screaming in video collections; detect and count the number of people in a given frame of a video; aid in the geolocation of a video; and synchronize multiple videos taken by different sources at the same time and place to create a composite view of an incident.

But perhaps the most sophisticated use of AI applied to human rights can be found in the center’s Event Labeling through Analytic Media Processing (E-LAMP) system.Footnote 29 E-LAMP is a machine learning and computer vision–based video analysis system that is able to detect objects, sounds, speech, text, and event types (say, a news broadcast or a protest) in a video collection. In practice, this allows users to run semantic queries within video collections. If the system is properly trained, a user could ask it, for example, to find images of individuals performing a specific action or objects of a particular kind in a collection of thousands of videos. This means that practitioners can use a system that can search thousands or even millions of videos to answer questions like: How many videos show helicopters dropping things (e.g., barrel bombs or bodies)? How many videos may be communiqués from a faction within a conflict? What are the commonalities among a group of videos? These searches can be done in a fraction of the time that it would take a human analyst to perform the same task. AI projects like E-LAMP will make practitioners more effective by allowing small teams to quickly examine and analyze large amounts of evidence. While systems like this could become valuable automated research assistants that aid in the process of knowledge discovery, they will remain instruments for human domain experts. E-LAMP cannot yet find all actions that are relevant for a case, for example, torture or physical abuse, but it can find potential markers for those actions that can then be reviewed by a practitioner.
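
E-LAMP’s own interfaces are not reproduced here, but the querying idea can be illustrated with a toy example. Assume, hypothetically, that detectors have already scored each video for each concept and that the scores sit in a table; finding every video that contains both a helicopter and an explosion then reduces to a filter, sketched with the open source pandas library:

    import pandas as pd

    # Hypothetical detector output: one row per (video, concept) with a score.
    detections = pd.DataFrame([
        {"video": "vid_001.mp4", "concept": "helicopter", "score": 0.91},
        {"video": "vid_001.mp4", "concept": "explosion", "score": 0.84},
        {"video": "vid_002.mp4", "concept": "protest", "score": 0.77},
        {"video": "vid_003.mp4", "concept": "helicopter", "score": 0.42},
    ])

    def videos_with(concepts, threshold=0.7):
        # Keep confident detections, then require every requested concept.
        hits = detections[detections["concept"].isin(concepts)
                          & (detections["score"] >= threshold)]
        counts = hits.groupby("video")["concept"].nunique()
        return sorted(counts[counts == len(concepts)].index)

    print(videos_with(["helicopter", "explosion"]))  # -> ['vid_001.mp4']

The hard work, of course, is in producing the detection scores; once they exist, the analyst’s question becomes a simple query.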

The big opportunity for human rights practice lies in the extraordinary potential that artificial intelligence has to support problem solving, pattern detection, and knowledge discovery. But this kind of capability will not simply materialize from thin air. There is a time-bound opportunity for practitioners to influence artificial intelligence before it completely leaves its infancy. Legal experts could provide important guidance on how AI-derived findings could be ethically verified in court, how AI may shape the definition of legal personhood, and how data analyzed in the cloud can be protected from exposure to nefarious actors. For this, human rights practitioners need to engage early and often with the technologists and organizations that are driving the technological future of AI.

B Blockchain

In 2008, a person or group of persons writing under the pseudonym Satoshi Nakamoto published a paper proposing Bitcoin, a peer-to-peer electronic currency aimed at supporting transactions without a central financial institution.Footnote 30 Since then, Bitcoin has drawn attention from a wide variety of actors and entities, ranging from banks and regulators to organized criminals and futurists. Looking back, it is not hard to see why it is considered a potential disrupter of national, regional, and international financial systems. It took only two years from its formal launch in 2009 for this revolutionary virtual currency to achieve parity with the US dollar.Footnote 31 And it took only a few additional years to reach an all-time high of $1,216.73.Footnote 32 Remarkably, all of this happened with a decentralized, public, and open infrastructure.

But beyond its disruptive capacity and its direct challenge to institutions that reproduce and maintain inequalities, like banks and international financial regulators, there are other aspects of Bitcoin that could advance the future of transparency and accountability. Its potentially transformative power for human rights practice is anchored in the innovative design of the technology underneath the currency, which facilitates public trust without the need for a controlling third party. This technology is commonly referred to as “blockchain.”

Blockchain refers to a distributed network of computers in which digital transactions are recorded in a public database, using cryptography to digitally sign them and connect them to previous transactions. This process creates a chain of grouped transactions, or blocks, that cannot be altered without detection. One way to think of this is as if everyone in a network of peers acted as a digital notary. In this network, transactions are notarized by multiple notaries, and notaries publicly broadcast the existence of a record by linking it to an existing and already notarized transaction or document in a public ledger. Among the most interesting attributes of such a system is the fact that trust is not placed in the nodes, but rather in the strength and openness of the network and the science behind the protocol for transactions.
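
The tamper-evidence behind this notary analogy comes from cryptographic hashing: each block commits to the hash of the block before it, so changing any historical record breaks every link that follows. A minimal sketch in Python, a toy rather than Bitcoin’s actual data structures:

    import hashlib, json, time

    def block_hash(block):
        # Deterministic SHA-256 digest of a block's contents.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(chain, transactions):
        # Each new block commits to the hash of the previous block.
        chain.append({
            "index": len(chain),
            "timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
        })

    chain = []
    add_block(chain, ["genesis"])
    add_block(chain, ["A pays B"])
    add_block(chain, ["B pays C"])

    # Tampering with history breaks the chain of hashes that follows it.
    chain[1]["transactions"] = ["A pays M"]
    assert block_hash(chain[1]) != chain[2]["prev_hash"]

In a real network, thousands of independent nodes hold copies of the chain, which is why no single participant can quietly rewrite it.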

Aside from buying them on a currency exchange, people acquire bitcoins not through labor, but through computation. The currency is purely digital and is not backed by gold or any other representation in the physical world. Its creation is the result of software and hardware computations that must solve increasingly difficult mathematical puzzles. Once a solution is found, newly created bitcoins are the reward. This process is called “mining.” Each reward is credited to the person behind the computation through a unique cryptographic identifier, itself the result of computation, that the user obtains when installing the mining software. Such a key is managed by what is referred to as a wallet, and the wallet is where awarded (or purchased) bitcoins are recorded. In other words, besides exchanging them directly, as a person could do with any foreign currency, the only way to get them is by solving computational problems.
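
These mining puzzles can be illustrated in a few lines. In Bitcoin they take the form of proof of work: miners search for a nonce that gives the block’s hash a required number of leading zeros. A toy version follows; real mining applies double SHA-256 to structured block headers, and the network adjusts the difficulty automatically:

    import hashlib

    def mine(block_data: str, difficulty: int = 4) -> int:
        # Search for a nonce whose SHA-256 digest starts with `difficulty` zeros.
        target = "0" * difficulty
        nonce = 0
        while not hashlib.sha256(
                f"{block_data}{nonce}".encode()).hexdigest().startswith(target):
            nonce += 1
        return nonce

    nonce = mine("example block")
    print(nonce, hashlib.sha256(f"example block{nonce}".encode()).hexdigest())

Finding the nonce is expensive; checking it takes one hash. That asymmetry is what lets the whole network verify, cheaply, that real computational work stands behind each block.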

Blockchain has several human rights applications. It could be used to certify that a video, image, or other type of digital document existed at a given time. This attribute, normally referred to as proof of existence, strengthens the evidentiary weight of a digital asset, like a video or image of a human rights abuse that appeared on social media, by improving the ability of investigators to validate or reject claims of authenticity over the material and to map its chain of custody. Preliminary uses of this technology can be seen in projects like Video Vault,Footnote 33 a system that I created and maintain, which allows human rights practitioners to preserve digital resources of any kind for later reference. Video Vault facilitates the verification of digital assets by providing an online content sample with a trusted time stampFootnote 34 reflecting the collection time. This time stamp is added as a transaction to the blockchain, where it can be accessed to validate that the asset – picture, video, or web page – existed at a particular point in time. Digital assets collected by an individual or organization can in this way be “notarized” and added to the blockchain to create a public ledger, enhancing the verification of media that may contain evidence of human rights abuses.
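
In such schemes, what is written to the blockchain is not the video itself but a short cryptographic digest of it. A sketch of the hashing step, using only Python’s standard library; the file name is hypothetical, and the anchoring transaction itself would be handled by a service like Video Vault or a timestamping protocol:

    import hashlib

    def sha256_of_file(path: str) -> str:
        # Hash in chunks so large video files need not fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Once this 64-character digest is embedded in a blockchain transaction,
    # re-hashing the file and comparing digests later proves the file existed
    # in exactly this form no later than the transaction's time.
    print(sha256_of_file("incident_video.mp4"))

Because the digest changes completely if even one byte of the file changes, a match ties the preserved asset to the recorded time stamp.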

It is also possible to imagine applications for blockchain technology in other areas of social activity that relate to human rights practice. One example is trade and the distributed manufacturing or production of goods. Technology like blockchain could be used to create a chain of trust or custody around specific steps of manufacturing, thus increasing the ability to monitor the life cycles of the goods we consume. Such a system could, at least in theory, enhance the ability of agencies, unions, regulators, and civil society to enforce compliance with laws and guidelines that defend the rights of workers, indigenous people, and the environment, to name a few.

This traceability feature is already part of the offerings of companies like ProvenanceFootnote 35 to food producers and supply chain watchdogs. Provenance is a UK-based company that uses blockchain platforms like Bitcoin and Ethereum to create a public record of the supply chain, from the origin of a product to its end consumer. This technology could help consumers learn where their clothes were made or where the fish they are thinking about purchasing for dinner was netted. Perhaps more importantly, it could help them understand the environmental and labor conditions under which their goods were produced or obtained. What is learned from these applications could inform the use of similar techniques in human rights fact-finding.

As is often the case with new technologies, a group of forward-looking technologists and entrepreneurs have proposed other creative applications for blockchain: increasing transparency and reducing corruption in public spending by governments or in the use of charitable funds; creating efficient ways to transfer currency in support of basic rights, like access to health care and food security, when traditional financial institutions fail during humanitarian crises; creating alternative and inclusive systems of land registration for migrants; and providing access to identities in order to prevent discrimination against ex-convicts.

A recently formed e-governance consultancy called Humanitarian BlockchainFootnote 36 is attempting to make some of these ideas a reality. Because of its distributed and open nature, as well as its reliance on sound mathematical concepts, a blockchain is resistant to manipulation: even if a large government or a local paramilitary organization disagrees with what it carries, the public ledger will remain unmodified and available to its users. Recently, researchers from the Massachusetts Institute of Technology and Tel Aviv University proposed a decentralized personal data-management system that would ensure that users own and control their data. Such a system would enhance the privacy of sensitive data, including that of human rights practitioners.Footnote 37

Today, blockchain-based systems may be complicated for grassroots organizations to access and understand, but this will change rapidly. Just as is beginning to happen with encryption and security mechanisms like Secure Sockets Layer/Transport Layer Security (SSL/TLS), which secures most transactions over the Internet, and end-to-end encryption, which secures communications on tools like WhatsApp and Signal, the benefits of this technology are likely to become available in seamless, low-cost ways for practitioners of all kinds.

C Open Hardware, Affordable Sensors, and the Internet of Things

For many years, human rights technology has been limited to software. Software can be written on virtually any computer. There are also numerous well-documented programming languages that, with some patience and basic literacy, anyone can learn. Furthermore, there is no need for a project to start from scratch, because with the growth of open source and free software, many libraries and code bases can help anyone jump-start a project. Such availability and simplicity were, without a doubt, key to the explosion of software products for many disciplines, including human rights practice.

Over the past decade, slowly but incrementally, hardware has followed suit. As with software, the advent of open source hardware has created a vast arena for experimentation and has expanded the toolkit for problem solving that practitioners can access. In 2003, Hernando Barragán, a master’s student at the Interaction Design Institute Ivrea in Italy, created Wiring as part of his thesis project. Wiring was aimed at lowering the barrier to accessing prototyping tools for those interested in developing electronics. It consists of the complete tool set needed to develop functional electronic prototypes, from an integrated development environment (IDE) and a simple programming language for microcontrollers to a bootloader to update programs and an extensive online documentation library. In a controversial move, Barragán’s thesis advisors and fellow students copied the project to create Arduino in 2005. Arduino rapidly became the platform of choice for a new generation of open source hardware tinkerers. By 2013, there were 700,000 registered Arduino devices and at least an equal number of clones or copies. The number of prototyping platforms grew, and there are now dozens of boards and platforms to choose from. The projects enabled by this new generation of hardware range from simple LED-control projects to sophisticated motor-control and sensor-management devices. Some of these projects illustrate what we could see at the intersection of human rights practice and open hardware in the near future, especially as issues of environmental justice – including but not limited to nuclear disasters, oil spills, and water safety – are increasingly framed in human rights terms.

An important example of environmental justice-oriented open hardware involved the creation of sensors to measure radioactivity in the aftermath of the March 11, 2011 earthquake and tsunami that severely damaged the Fukushima Daiichi nuclear power plant in Japan. The radiation leak that occurred at the power plant was followed by panic and misinformation. Citizens with enough money acquired Geiger counters to measure the scale of the catastrophe, both for personal safety and for the eventual accountability of officials whom they felt were not appropriately responding to the crisis. These devices, which are designed to measure ionizing radiation, became a critical source of reliable information for the affected population. During the early response, the supply of these devices declined and prices rose beyond what many citizens could afford. A group of developers, activists, and responders held Skype discussions to brainstorm a possible solution. After a few days, this group met in person at Tokyo Hackerspace. Within a week, they had created the first bGeigie, a DIY Geiger counter that could increase access to reliable data, and they set off for Fukushima. Today, that project has evolved into Safecast, founded by Sean Bonner, Joi Ito, and Pieter Franken as an international, volunteer-centered organization devoted to open citizen science for the environment.Footnote 38

A similar story is that of the Public Laboratory for Open Technology and Science (Public Lab), founded in the wake of the April 2010 Deepwater Horizon oil spill in the Gulf of Mexico on the BP-operated Macondo Prospect. During the spill, there was an information blackout for residents of the region. In response, a group of concerned residents, environmental advocates, designers, and social scientists launched DIY kite and balloon aerial photography kits over the spill to collect real-time data about its impact.Footnote 39 The success of the mapping effort encouraged the group to establish Public Lab as a research and social space for the development of low-cost tools for community-based environmental monitoring and assessment. Among the tools that Public Lab offers is a Desktop Spectrometry Kit, which puts a low-cost, easy-to-use spectrometer in the hands of any individual or organization interested in collecting spectra – the electromagnetic “fingerprints,” or unique identifiers, of materials.Footnote 40
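
The software side of such DIY sensing is often modest. As a hedged illustration: a bGeigie-style device typically streams readings over a serial port, and a few lines of Python using the open source pyserial library can log them with timestamps. The port name and message format below are assumptions made for the sketch, not Safecast’s actual protocol:

    import serial  # pip install pyserial
    from datetime import datetime, timezone

    # Port name and line format are hypothetical; adapt to the device at hand.
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port, \
            open("radiation_log.csv", "a") as log:
        for _ in range(100):  # capture 100 readings
            line = port.readline().decode("ascii", errors="replace").strip()
            if line:  # e.g., a counts-per-minute value such as "42"
                log.write(f"{datetime.now(timezone.utc).isoformat()},{line}\n")

Pairing each reading with a UTC timestamp (and, on devices with GPS, a location) is what turns a handheld counter into a shareable, citizen-science dataset.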

The examples above are a small sample of the vibrant community around microcontrollers, sensors, and citizen science. They can help us imagine how the availability of easy-to-use, low-cost sensors and measurement kits may have a transformative effect on the future of human rights. Could we measure the spectral fingerprint of a tear gas canister with sufficient accuracy to point to its origin? Could communities directly and reliably collect information about water quality before and after an extractive industry development? Could we use remote equipment to take samples of chemical agents used against vulnerable populations? Human rights practitioners need to engage with the vibrant open source hardware community to find answers to questions like these. While such uses may not yet seem related to traditional human rights work, this may change rapidly as environmental issues, like those related to extractive industries or access to water, permeate human rights practice. More importantly, technologies like those discussed above are aligned with the type of technology transfer that Dalindyebo Shabalala calls for in Chapter 3, both because they enable low-cost and broad access, and because they can contribute to the creation of complex monitoring ecosystems that could inform future human rights frameworks.

Hardware is not only sensors and microcontrollers. Over the past ten years, there have been efforts to reduce the cost of, and increase access to, computers. Perhaps the best-known example of this is the Raspberry Pi, a series of credit card–sized single-board computers developed in the United Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in schools and developing countries.Footnote 41 More importantly, its software ecosystem is largely open source, and the device is available anywhere in the world for under $50. The advent of this device has created a great deal of excitement among developers and technologists, as its processing power and possibilities are immense compared to a microcontroller like the Arduino. This excitement can be seen in its adoption: since the launch of the first model, the Raspberry Pi 1 Model B, in February 2012, more than ten million units have been sold.Footnote 42 Enthusiasts and developers have started to create potentially relevant projects for human rights practice. For example, developers have used Raspberry Pi computers to create specialized routers that increase the anonymity of their users. Others have created advanced remote sensor units that automatically collect data and broadcast it in real time.
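
As a hedged sketch of that last idea: a short Python script on a Raspberry Pi can read a sensor and publish each value to a data-collection endpoint using the open source requests library. The URL, station name, and read_sensor function are placeholders for illustration, not a real service:

    import time
    import requests  # pip install requests

    def read_sensor() -> float:
        # Placeholder: replace with an actual GPIO or ADC reading on the Pi.
        return 23.5

    while True:
        # Hypothetical endpoint; any data-collection API or broker would do.
        requests.post(
            "https://example.org/api/readings",
            json={"station": "field-01", "value": read_sensor()},
            timeout=10,
        )
        time.sleep(60)  # one reading per minute

A $50 computer running a loop like this, left in place for months, is the kernel of the real-time monitoring stations described above.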

The Novena laptop, launched in 2014, was designed for users who care about free software and open source, or who want to modify and extend their hardware. Its creator, Andrew “bunnie” Huang, promoted it as “a laptop with no secrets.”Footnote 43 It is this claim that makes the Novena interesting for the future of human rights practice. A laptop with nothing but modifiable and open source hardware and software may allow practitioners to access hardware that they can trust to carry out sensitive work and transfer sensitive information. Open source hardware and software are potentially more trustworthy than proprietary technology, as they can be reviewed and audited by anyone who is willing to do so.

The future of Novena is unclear, as it has not yet found commercial success, but its existence has inspired a generation of entrepreneurs willing to compete with large manufacturers to offer options for general users. An important example of this is the Librem 13, a laptop available since 2016 that promises to respect privacy and enhance security in “every chip in the hardware, every line of code in the software.”Footnote 44 The laptop ships with the option of two operating systems, PureOS or Qubes OS, which are both well regarded in the security and open source communities as strong and reliable options for those with security and privacy in mind. It also includes hardware kill switches that shut down the microphone, camera, Wi-Fi connection, and Bluetooth. These are important characteristics that practitioners should consider, given the scope of unchecked surveillance by governments exposed to a broad public by the revelations of Edward Snowden and other whistleblowers, as described by Lisl Brunner in Chapter 10.

If these devices survive and evolve, or if they encourage other open and secure products, they will provide valuable tools for human rights practitioners seeking to protect the data of vulnerable populations. As the market for open source or privacy-enhancing hardware is in its early stages of development, it is unclear whether the scale of production will be sufficient to reach human rights practitioners around the globe. Scale will not only affect the affordability of a device but also determine whether it moves into common usage. If it does not, an uncommon device could raise red flags when its user crosses borders or adversarial checkpoints.

It is essential that secure tools are not just available for human rights researchers but are also adopted by wider communities. The general adoption of features by nonspecialized products makes the use of these features by human rights researchers less risky, because they are less identified with behavior the state wants to control. A powerful example of this is the adoption of end-to-end encryption by the popular messaging application WhatsApp. In 2016, WhatsApp announced that it was making end-to-end encryption the communication default for its billion-plus users.Footnote 45 The notion of end-to-end encryption, which refers to the use of communications systems in which only the originator and recipient of a message can read its contents, is nothing new to human rights practice. For many years, dozens of human rights and technology advocates have promoted end-to-end encryption as critical for the future of journalistic and human rights work,Footnote 46 but it was not until this development that such technology became widely available. If projects like the Novena and Librem 13 laptops successfully compete for a small fraction of the market share of companies like Lenovo and Hewlett-Packard, they could create pressure for other manufacturers to adopt the privacy-enhancing features that distinguish them, and in doing so offer secure computing alternatives for human rights practitioners.

Beyond the expansion of these existing technologies, we are also likely to see innovation around the Internet of Things, or IoT, which refers to the increased connectivity and networking among devices of all kinds and purposes. The IoT, which allows the devices of smart homes and smart cities to be controlled remotely, and in many cases automatically, is linked directly to the growing availability of open hardware and sensors. From thermostats and refrigerators to wearable devices and new forms of personal and mobile devices, we are likely to see connected devices in virtually every aspect of human life. This will likely create excellent opportunities for new forms of fact-finding and research, but it will also create new perils for human rights practitioners and general users alike. Perhaps the biggest challenge will come from the ability that governments and organized criminals have developed to access and analyze data both stored and in transit. We are only starting to understand what this might mean: consider recent analyses of the privacy implications of fitness trackers,Footnote 47 of how law enforcement could use our intelligent personal digital assistants in criminal and national security investigations,Footnote 48 and of how connected home cameras could be infiltrated by organized criminals, governments, and other nefarious actors.Footnote 49

V Conclusion

Events of the past five years have significantly shaped the discourse around human rights technology. What was learned and confirmed through Edward Snowden’s revelations of mass and unchecked surveillance by nation-states and corporations has necessarily focused the attention of global civil society on the dire effects of surveillance and the need to counter them.Footnote 50 The state of surveillance has cast a dystopian shadow over the future of human rights, as Mark Latonero points out in Chapter 7: practitioners fear technology will be used for control rather than liberation. The hypersurveillance practices of our times, and the role that technology plays in them, are indeed an extensive attack on human rights.Footnote 51 However, human rights practitioners should not let that hinder their ability to imagine alternative visions that could guide the intersection of human rights and technology.

The technologies discussed in this chapter do not represent an exhaustive compilation of the trends that will shape the future of human rights practice; rather, they are a starting point for expanding our understanding of what technology could do for us in the near future. Challenging current technology transfer models and expanding the ecosystem of actors around them is key, because a more inclusive, deliberate, and forward-looking interdisciplinary field around human rights technology will create a better opportunity to advance the larger human rights field.

A change in the dynamics of technology transfer will challenge the traditionally asymmetrical power dynamics between human rights practitioners and their transnational supporters. We can foster this by promoting capacity-building in the Global South, favoring open source software and hardware, and critically evaluating budgetary allotments to technology. In the process, grassroots practitioners will be at the helm of designing and adapting human rights technology. We must be conscious that this will challenge the growth of professional opportunities for Global North practitioners. Important questions remain for any next step: Can human rights play a role in the governance of technology? What role can the private sector play in advancing human rights technology? Can human rights challenges drive technological innovation? To answer them, we should be open to interdisciplinary conversations like the one taking place in this volume, and encourage an inclusive and participatory multistakeholder ecosystem.

The approach of human rights practitioners to technology will be a determining factor in their ability to advance accountability, transparency, and justice in the years to come. This book is an invitation to imagine the future of the intersection of technology and human rights practice. For this intersection to benefit practitioners, it must adopt a solidarity-based framework for technology transfer.

A solidarity approach requires technologists to understand and respect the cultural context of the environment they are working within. They must reimagine the relationship as bidirectional and characterize their counterparts in technology transfer as active collaborators. Technologists must establish partner relationships with practitioners, from designing solutions that involve technology all the way through to evaluating them. Practitioners should also be able to tinker with and modify the technologies they are using, and technologists should support them in doing so. This commitment should be reflected in the timeline, budget, and conceptualization of the project. Solidarity requires careful consideration of how technology may displace human resources or compete with scarce resources available in the human rights funding landscape. This technology transfer approach prioritizes human capacity and sustainability above technical complexity and sophistication. Finally, technologists must continuously question their own role within larger power structures – are they helping to reduce the burden of inequality and dependency, or are they just recreating it through the deployment of technology? Ultimately, a solidarity approach demands that technologists not contribute to long-term inequalities while working with human rights workers and communities in crisis.

Footnotes

10 Digital Communications and the Evolving Right to Privacy

1 All opinions expressed in this chapter are those of the author alone and should not be attributed to any organization. Lisl would like to thank Sarah St. Vincent for her thoughtful comments on prior versions of this chapter.

2 Report of the Special Rapporteur on the Right to Privacy, Joseph A. Cannataci, ¶ 20, U.N. Doc. A/HRC/31/64 (March 8, 2016) (“Cannataci Report”); D. Solove, “Conceptualizing Privacy” (2002) 90 California Law Review 1088–89; D. Banisar and S. Davies, “Global Trends in Privacy Protection: An International Survey of Privacy, Data Protection, and Surveillance Laws and Developments” (1999) 18 John Marshall Journal of Computer & Information Law 1113 at 6–8.

3 See, e.g., H. Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford, CA: Stanford University Press, 2010), pp. 69–70, 81–88; Solove, “Conceptualizing Privacy,” at 1109–24. Solove has identified at least six different but interrelated conceptualizations of the essence of privacy: 1) the right to be let alone, 2) limited access to self, 3) secrecy, 4) control over personal information, 5) personhood (the protection of one’s personality, individuality, and dignity), and 6) intimacy (control over one’s intimate relations or aspects of life).

4 International Covenant on Civil and Political Rights, in force March 23, 1976, GA res. 2200A (XXI), 21 UN GAOR Supp. (No. 16) at 52, 999 UNTS 171, art. 17.

5 Universal Declaration of Human Rights, GA res. 217A (III), U.N. Doc. A/810 at 71 (1948). Article 12 omits the two occurrences of the word “unlawful” from the first paragraph.

6 European Convention for the Protection of Human Rights and Fundamental Freedoms, Rome, November 4, 1950, in force September 3, 1953, ETS 5; 213 UNTS 221. According to Article 8, “1. Everyone has the right to respect for his private and family life, his home and his correspondence. 2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.”

7 Charter of Fundamental Rights of the European Union, 2010 O.J. (C 83) 389 (March 30, 2010), in force December 1, 2009, art. 7 (“Everyone has the right to respect for his or her private and family life, home and communications.”).

8 American Convention on Human Rights, San Jose, November 22, 1969, in force July 18, 1978, OAS Treaty Series No. 36; 1144 UNTS 123. Article 11 establishes: “1. Everyone has the right to have his honor respected and his dignity recognized. 2. No one may be the object of arbitrary or abusive interference with his private life, his family, his home, or his correspondence, or of unlawful attacks on his honor or reputation. 3. Everyone has the right to the protection of the law against such interference or attacks.”

9 League of Arab States, Arab Charter on Human Rights, September 15, 1994. Article 17 establishes: “Private life is sacred, and violation of that sanctity is a crime. Private life includes family privacy, the sanctity of the home, and the secrecy of correspondence and other forms of private communication.”

10 Dudgeon v. United Kingdom, Eur. Ct. H.R., App. No. 7525/76 (October 22, 1981).

11 Atala Riffo and daughters v. Chile, Judgment, Inter-Am. Ct. H.R. (ser. C) No. 239, ¶¶ 161–78 (February 24, 2012); Artavia Murillo et al. (“In Vitro Fertilization”) v. Costa Rica, Judgment, Inter-Am. Ct. H.R. (ser. C) No. 257 (November 28, 2012); Airey v. Ireland, Eur. Ct. H.R., App. No. 6829/73 (October 9, 1979).

12 See, e.g., Cruzan v. Director, Missouri Department of Health, 497 U.S. 261 (1990) (describing lower court judgments on individual decisions to terminate medical treatment that were framed in terms of privacy rights, but declining to address the case in those terms).

13 Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, ¶ 24, U.N. Doc. A/HRC/23/40 (2013) (“La Rue Report 2013”); PEN America, Global Chilling: The Impact of Mass Surveillance on International Writers (2015); Human Rights Watch and the American Civil Liberties Union, With Liberty to Monitor All: How Large-Scale U.S. Surveillance is Harming Journalism, Law, and American Democracy (July 2014).

14 In Vitro Fertilization, ¶ 143; see also Cannataci Report, ¶ 8.

15 See, e.g., General Comment No. 31, Nature of the General Legal Obligation on States Parties to the Covenant, U.N. Doc. CCPR/C/21/Rev.1/Add. 13 (2004), ¶ 6.

16 Velasquez Rodriguez v. Honduras, Judgment, Inter-Am. Ct. H.R. (ser. C) No. 4, ¶ 166 (July 29, 1988); see also General Comment No. 31, ¶¶ 7, 13; Airey v. Ireland, ¶ 32.

17 The Human Rights Committee is the UN body charged with receiving periodic reports from states parties to the ICCPR on implementation of the treaty, as well as interpreting the ICCPR through its general comments and, where a state has recognized its competence, through reports issued in response to communications. International Covenant on Civil and Political Rights, arts. 28, 40–41.

18 General Comment No. 31, ¶ 8.

19 Klass and others v. Germany, Eur. Ct. H.R., App. No. 5029/71 (September 6, 1978), ¶ 42.

20 W. Seltzer, “Population Statistics, The Holocaust, and the Nuremberg Trials” (1998) 24 Population and Development Review 511–52.

21 See, e.g., T. Coombes, “Lessons from the Stasi,” The European (April 1, 2015).

22 General Comment No. 16: Article 17 (Right to Privacy), U.N. Doc. CCPR/GC/16 (1988), ¶ 8.

23 See, e.g., A. Deeks, “An International Legal Framework for Surveillance” (2015) 55 Virginia Journal of International Law 291–368 at 300, 301–05, 313 (“Most scholars agree that international law either fails to regulate spying or affirmatively permits it.”); R. J. Bettauer, “Questions Relating to the Seizure and Detention of Certain Documents and Data (Timor-Leste v. Australia). Provisional Measures Order” (2014) 108 American Journal of International Law 763–69. In its first case involving espionage issues, the International Court of Justice determined that “a State has a plausible right to the protection of its communications with counsel relating to an arbitration or to negotiations, in particular, to the protection of correspondence between them, as well as to the protection of confidentiality of any documents and data prepared by counsel to advise that State in such a context.” Questions Relating to the Seizure and Detention of Certain Documents and Data (Timor-Leste v. Australia), International Court of Justice, Request for the Indication of Provisional Measures, Order of March 3, 2014, ¶ 27.

24 G. Greenwald, “NSA collecting phone records of millions of Verizon customers daily,” The Guardian, June 6, 2013; Privacy and Civil Liberties Oversight Board (United States), Report on the Telephone Records Program Conducted under Section 215 of the USA PATRIOT Act and on the Operations of the Foreign Intelligence Surveillance Court (January 23, 2014), pp. 21–31 (“PCLOB Report on Section 215”). Through these methods, it was estimated that the US government may have retained records related to more than 120 million telephone numbers.

25 Privacy and Civil Liberties Oversight Board (United States), Report on the Surveillance Program Operated Pursuant to Section 702 of the Foreign Intelligence Surveillance Act (July 2, 2014), pp. 32–41; G. Greenwald and E. MacAskill, “NSA Prism program taps into user data of Apple, Google, and others,” The Guardian, June 7, 2013. For a description of several US intelligence programs disclosed by Edward Snowden, see A. Toh, F. Patel, and E. Gotein, “Overseas Surveillance in an Interconnected World,” Brennan Center for Justice, New York University School of Law (2016), pp. 5–10.

26 E. MacAskill et al., “GCHQ taps fibre-optic cables for secret access to world’s communications,” The Guardian, June 21, 2013.

27 S. Ackerman and J. Ball, “Optic Nerve: Millions of Yahoo webcam images intercepted by GCHQ,” The Guardian, February 28, 2014.

28 J. Ball, J. Borger, and G. Greenwald, “Revealed: How U.S. and U.K. spy agencies defeat internet privacy and security,” The Guardian, September 6, 2013.

29 E. MacAskill et al., “GCHQ intercepted foreign politicians’ communications at G20 summits,” The Guardian, June 17, 2013; E. MacAskill and J. Borger, “New NSA leaks show how US is bugging its European allies,” The Guardian, June 30, 2013; J. Burke, “NSA spied on Indian embassy and UN mission, Edward Snowden files reveal,” The Guardian, September 25, 2013; J. Ball, “NSA monitored calls of 35 world leaders after US official handed over contacts,” The Guardian, October 25, 2013.

30 “The NSA’s Secret Spy Hub in Berlin,” Der Spiegel, October 27, 2013.

31 F. Johannes and J. Follorou, “In English: Revelations on the French Big Brother,” Le Monde, July 4, 2013.

32 J. Borger, “GCHQ and European spy agencies worked together on mass surveillance,” The Guardian, November 1, 2013.

33 I. Poetranto, “The Kremlin’s new Internet surveillance plan goes live today,” The Citizen Lab, November 1, 2012, https://citizenlab.ca/2012/11/the-kremlins-new-internet-surveillance-plan-goes-live-today/; S. Walker, “Russia to monitor ‘all communications’ at Winter Olympics in Sochi,” The Guardian, October 6, 2013.

34 OpenNet Initiative, Internet Filtering in China (2009), pp. 14–17; Human Rights Watch, Freedom of Expression and the Internet in China (2001).

35 Human Rights Watch, They Know Everything We Do: Telecom and Internet Surveillance in Ethiopia (2014).

36 Privacy International, Shadow State: Surveillance, Law and Order in Colombia (2015), pp. 27–31.

37 See, e.g., B. Marczak et al., “Mapping Hacking Team’s ‘Untraceable’ Spyware,” The Citizen Lab, February 2014; W. R. Marczak, J. Scott-Railton, and M. Marquis-Boire, “When Governments Hack Opponents: A Look at Actors and Technology,” Twenty-Third USENIX Security Symposium (August 2014); see also A. Hern, “Hacking Team hack casts spotlight on murky world of state surveillance,” The Guardian, July 11, 2015; WikiLeaks, “The Hacking Team Archives,” July 8, 2015, https://wikileaks.org/hackingteam/emails/.

38 “The Right to Privacy in the Digital Age,” U.N. Doc. A/RES/68/167 (December 18, 2013).

39 “The Right to Privacy in the Digital Age,” U.N. Doc. A/RES/69/166 (December 18, 2014).

40 “The Right to Privacy in the Digital Age,” U.N. Doc. A/HRC/RES/28/16 (April 1, 2015).

41 A. Alexander, “Digital surveillance ‘worse than Orwell,’ says new UN privacy chief,” The Guardian, August 24, 2015.

42 Uniting and Strengthening America by Fulfilling Rights and Ensuring Effective Discipline Over Monitoring Act of 2015 (USA FREEDOM Act), Public Law 114–23, 129 Stat. 268 (June 2, 2015).

43 The White House, Presidential Policy Directive/PPD-28, Signals Intelligence Activities, January 17, 2014.

44 Loi No. 2015–912 of July 24, 2015 (France); “Swiss endorse new surveillance powers,” BBC, September 25, 2016; “Switzerland votes in favour of greater surveillance,” AFP, September 25, 2016; “Wet op de inlichtingen- en veiligheidsdiensten 20” (Netherlands), September 1, 2015; Y. Bahceli, “Dutch intelligence-gathering reform bill sparks privacy concerns,” Reuters, September 1, 2015.

45 Investigatory Powers Act 2016 (November 29, 2016), www.legislation.gov.uk/ukpga/2016/25/contents/enacted/data.htm.

46 See, e.g., D. Anderson QC, “Oral Evidence Taken Before the Joint Committee for the Investigatory Powers Bill,” December 2, 2015, Questions 61–75, www.parliament.uk/documents/joint-committees/draft-investigatory-powers-bill/oral-evidence-draft-investigatory-powers-committee.pdf. But see Cannataci Report, ¶ 39; E. MacAskill, “‘Extreme surveillance’ becomes UK law with barely a whimper,” The Guardian, November 19, 2016; I. Ashok, “UK passes Investigatory Powers Bill that gives government sweeping powers to spy,” International Business Times, November 18, 2016.

47 See, e.g., Malone v. United Kingdom, Eur. Ct. H.R., App. No. 8691/79, ¶ 84 (August 2, 1984); Office of the UN High Commissioner for Human Rights, “The right to privacy in the digital age” (“OHCHR Report”), U.N. Doc. A/HRC/27/37, ¶ 19; Escher v. Brazil, Judgment, Inter-Am. Ct. H.R. (ser. C) No. 200, ¶ 114 (July 6, 2009).

48 See, e.g., Amann v. Switzerland, Eur. Ct. H.R., App. No. 27798/95, ¶ 69 (February 16, 2000); Rotaru v. Romania, Eur. Ct. H.R., App. No. 28341/95, ¶ 46 (Grand Chamber, May 5, 2000); S. and Marper v. United Kingdom, Eur. Ct. H.R., App. Nos. 30562/04 and 30566/04, ¶ 86 (Grand Chamber, December 4, 2008); Digital Rights Ireland v. Minister for Communications, Court of Justice of the European Union, Joined Cases C-293/12 and C-594/12, ¶¶ 34–35 (April 8, 2014).

49 Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, Martin Scheinin, ¶¶ 26–28, U.N. Doc. A/HRC/14/46 (May 17, 2010) (“Scheinin Report 2010”); Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, Martin Scheinin, ¶¶ 35, 48, U.N. Doc. A/HRC/10/3 (February 4, 2009) (“Scheinin Report I 2009”); Tristan Donoso v. Panama, Judgment, Inter-Am. Ct. H.R. (ser. C) No. 193, ¶ 83 (January 27, 2009).

50 Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, ¶ 12, U.N. Doc. A/HRC/29/32 (May 22, 2015). As David Kaye has noted, “[e]ncryption and anonymity provide individuals and groups with a zone of privacy online to hold opinions and exercise freedom of expression without arbitrary and unlawful interference or attacks.” Ibid., ¶ 16; see also La Rue Report 2013, ¶¶ 23, 47–49.

51 While Article 8(2) of the European Convention specifies this, human rights bodies, experts, and tribunals have interpreted the ICCPR and the American Convention to require this test as well. See, e.g., Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, Martin Scheinin, ¶¶ 16–19, U.N. Doc. A/HRC/13/37 (December 28, 2009) (“Scheinin Report II 2009”); OHCHR Report ¶ 23; Escher v. Brazil, ¶ 116; Weber and Saravia v. Germany, Eur. Ct. H.R., App. No. 54934/00, ¶ 80 (June 29, 2006).

52 OHCHR Report, ¶ 23; Escher v. Brazil, ¶¶ 130–31; Zakharov v. Russia, Eur. Ct. H.R., App. No. 47143/06, ¶ 229 (Grand Chamber, December 4, 2015).

53 See, e.g., S. and Marper v. United Kingdom, ¶ 101.

54 Leander v. Sweden, Eur. Ct. H.R., App. No. 9248/81, ¶ 59 (March 26, 1987); S. and Marper v. United Kingdom, ¶ 102; Weber and Saravia, ¶ 106; Zakharov v. Russia, ¶ 232.

55 Klass v. Germany, ¶ 42; Weber and Saravia v. Germany, ¶ 106; Zakharov v. Russia, ¶ 232; see also OHCHR Report, ¶ 25.

56 See, e.g., Liberty and others v. United Kingdom, Eur. Ct. H.R., App. No. 58243/00, ¶¶ 64–70 (July 1, 2008); Malone v. United Kingdom, ¶¶ 80–82; Rotaru v. Romania, ¶ 62; Amann v. Switzerland, ¶ 63; Association for European Integration and Human Rights and Ekimdzhiev v. Bulgaria, Eur. Ct. H.R., App. No. 62540/00, ¶ 93 (June 28, 2007).

57 See, e.g., Kennedy v. United Kingdom, Eur. Ct. H.R., App. No. 26839/05, ¶ 155 (March 18, 2010); see also Klass v. Germany; Zakharov v. Russia; Szabo and Vissy v. Hungary, Eur. Ct. H.R., App. No. 37138/14 (January 12, 2016).

58 Kennedy v. United Kingdom. Although Weber and Saravia was an admissibility decision, the court deemed the German G-10 law to be prima facie consistent with the European Convention. The law provided for nontargeted communications surveillance in order to identify or prevent six specific offenses: “1) an armed attack on the Federal Republic of Germany; 2) the commission of international terrorist attacks in the Federal Republic of Germany; 3) international arms trafficking within the meaning of the Control of Weapons of War Act and prohibited external trade in goods, data-processing programmes and technologies in cases of considerable importance; 4) the illegal importation of drugs in substantial quantities into the territory of the Federal Republic of Germany; 5) the counterfeiting of money (Geldfälschung) committed abroad; 6) the laundering of money in the context of the acts listed under points 3 to 5.” Weber and Saravia, ¶ 27.

59 See, e.g., Klass v. Germany, ¶ 51.

60 See, e.g., Escher v. Brazil, ¶ 131.

61 Szabo and Vissy v. Hungary, ¶ 56; Weber and Saravia v. Germany, ¶ 95; see also Escher v. Brazil, ¶ 131; OHCHR Report, ¶ 28; La Rue Report 2013, ¶ 81.

62 OHCHR Report, ¶ 23; Klass v. Germany, ¶ 51.

63 Klass v. Germany; Weber and Saravia v. Germany. As mentioned above, Weber and Saravia was an admissibility decision rather than a judgment on the merits, but the court conducted a thorough examination of the G-10 law and determined that there were “adequate and effective guarantees against abuses of the State’s strategic monitoring powers,” making the applicants’ claims under Article 8 “manifestly ill-founded.” Weber and Saravia, ¶¶ 137–38.

64 Kennedy v. United Kingdom.

65 Zakharov v. Russia, ¶¶ 244–52; Amann v. Switzerland; Malone v. United Kingdom; Association for European Integration and Human Rights and Ekimdzhiev v. Bulgaria; Rotaru v. Romania; Szabo and Vissy v. Hungary.

66 See, e.g., Klass v. Germany, ¶ 48; La Rue Report 2013, ¶ 50; Szabo and Vissy, ¶¶ 68–70; Report on Terrorism and Human Rights, I/A C.H.R., OEA/Ser.L/V/II.116 Doc. 5 rev. 1 corr. (2002) ¶ 371; Escher v. Brazil, ¶ 115.

67 La Rue Report 2013, ¶ 62; Scheinin Report I 2009, ¶ 30.

68 Liberty v. United Kingdom, ¶ 63; OHCHR Report, ¶ 20.

69 OHCHR Report, ¶ 20.

70 Weber and Saravia v. Germany, ¶¶ 117, 137–38. But see La Rue Report 2013, ¶ 59 (suggesting that the G-10 law’s provisions on warrantless interception for national security purposes are overly broad).

71 Weber and Saravia, ¶¶ 96–102, 115–22, 137.

72 S. and Marper v. United Kingdom, ¶ 125.

73 Liberty v. United Kingdom, ¶ 64.

74 Zakharov v. Russia, ¶¶ 244–52.

75 Ibid., ¶¶ 261–72.

76 Szabo and Vissy v. Hungary, ¶¶ 69, 73.

77 See S. St. Vincent, “Did the European Court of Human Rights Just Outlaw ‘Massive Monitoring of Communications’ in Europe?,” Center for Democracy and Technology, January 13, 2016, https://cdt.org/blog/did-the-european-court-of-human-rights-just-outlaw-massive-monitoring-of-communications-in-europe/.

78 Big Brother Watch and others v. United Kingdom, Eur. Ct. H.R., App. No. 58170/13 (September 4, 2013); Bureau of Investigative Journalism and Alice Ross v. United Kingdom, Eur. Ct. H.R., App. No. 62322/14 (September 11, 2014); 10 Human Rights Organizations and others v. United Kingdom, Eur. Ct. H.R., App. No. 24960/15 (May 20, 2015).

79 Scheinin Report I 2009, ¶ 30.

80 Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, Ben Emmerson, U.N. Doc. A/HRC/25/59 (March 11, 2014) ¶¶ 52, 59; Cannataci Report, ¶ 39. But see La Rue Report 2013 (which does not state that bulk surveillance is per se incompatible with the ICCPR).

81 Scheinin Report I 2009, ¶¶ 37, 74.

82 See, e.g., D. Feinstein, “Feinstein Statement on Intelligence Collection of Foreign Leaders” (October 28, 2013); Z. Carpenter, “Can Congress Oversee the NSA?” The Nation, January 30, 2014; House of Commons Home Affairs Committee [United Kingdom], Seventeenth Report: Counterterrorism, Chapter 6 (April 30, 2014).

83 “Overseas Surveillance in an Interconnected World,” 32–34.

84 E. Goitein and F. Patel, “What Went Wrong with the FISA Court,” Brennan Center for Justice (2015), 22; PCLOB Report on Section 215, 59–60; American Civil Liberties Union v. Clapper, 785 F.3d 787, 811–19 (2d Cir. 2015).

85 PCLOB Report on Section 215, 177; “What Went Wrong with the FISA Court,” 27, 29.

86 La Rue Report 2013, ¶ 81; Office of the Special Rapporteur for Freedom of Expression, Inter-American Commission on Human Rights, Freedom of Expression and the Internet, OEA/Ser.L/V/II.CIDH/RELE/INF. 11/13 (December 31, 2013) ¶ 165.

87 See, e.g., OHCHR Report, ¶ 37; Association for European Integration and Human Rights and Ekimdzhiev v. Bulgaria, ¶¶ 84, 87; Rotaru v. Romania, ¶ 59; Zakharov v. Russia, ¶¶ 258–63.

88 See OHCHR Report, ¶ 37; Scheinin Report 2010, ¶ 8.

89 See, e.g., Scheinin Report 2010, ¶ 9; Zakharov v. Russia, ¶¶ 274–81. The former UN Special Rapporteur for counterterrorism and human rights has praised the Norwegian parliamentary oversight mechanism, and the European Court has approved systems in Germany and the United Kingdom. Scheinin Report I 2009, ¶ 45; Klass and others v. Germany; Weber and Saravia v. Germany; Kennedy v. United Kingdom, ¶¶ 166–68; Szabo and Vissy v. Hungary, ¶¶ 82–83. But see La Rue Report 2013, ¶ 59 (expressing concern that the German G-10 law permits warrantless surveillance of communications by the intelligence services).

90 See, e.g., Szabo and Vissy v. Hungary, ¶¶ 82–83.

91 Zakharov v. Russia, ¶ 270.

92 OHCHR Report, ¶ 38.

93 La Rue Report 2013, ¶ 92; Inter-American Commission on Human Rights, Freedom of Expression and the Internet, ¶ 113; “Report of the Freedom Online Coalition Working Group Three, Privacy and Transparency Online,” November 2015, pp. 43–45.

94 Scheinin Report II 2009, ¶ 16; La Rue Report 2013, ¶¶ 52, 79, 84; UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression and the Inter-American Commission on Human Rights Special Rapporteur for Freedom of Expression, “Joint Statement on WikiLeaks” (December 21, 2010).

95 ICCPR art. 2(3); European Convention art. 13; American Convention art. 25.

96 See, e.g., General Comment No. 31, ¶¶ 8, 15–19; Velasquez Rodriguez v. Honduras, ¶¶ 174 et seq.

97 Scheinin Report II 2009, ¶¶ 10–12.

98 See, e.g., La Rue Report 2013, ¶ 82; Weber and Saravia v. Germany, ¶ 135; Zakharov v. Russia, ¶ 287. But see OHCHR Report, ¶ 40 (determining that subsequent notice is not necessary but closely related to the question of effective remedy); Tele2 Sverige AB v. Post- och telestyrelsen and Secretary of State for the Home Department v. Tom Watson and Others, CJEU, Joined Cases C-203/15 and C-698/15, ¶ 121 (December 21, 2016).

99 Zakharov v. Russia, ¶¶ 171, 298.

100 See, e.g., Association for European Integration and Human Rights and Ekimdzhiev v. Bulgaria, ¶ 102; Kennedy v. United Kingdom, ¶ 167.

101 See, e.g., Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, Strasbourg, January 28, 1981, C.E.T.S. No. 108, in force October 1, 1985, art. 1 (“Data Protection Convention”); S. and Marper v. United Kingdom, ¶ 103; Van Hulst v. Netherlands, Comm. No. 903/1999, U.N. Doc. CCPR/C/82/D/903/1999 (November 15, 2004) ¶ 7.9; Scheinin Report II 2009, ¶ 55; General Comment No. 16, ¶ 10; Inter-American Commission on Human Rights, Freedom of Expression and the Internet, ¶¶ 138–42; Solove, “Conceptualizing Privacy.”

102 Charter of Fundamental Rights of the European Union, art. 8(1); Treaty on the Functioning of the European Union, OJ C 326, 26.10.2012, pp. 47–390, art. 16(1).

103 Scheinin Report II 2009, ¶ 12.

104 Data Protection Convention, arts. 5–8.

105 Organisation for Economic Co-operation and Development, Recommendation of the Council Concerning Guidelines governing the Protection of Privacy and Transborder Flows of Personal Data, C(80)58/FINAL, as amended on July 11, 2013 by C(2013)79 (“OECD Privacy Framework”).

106 APEC Privacy Framework, Publication APEC#205-SO-01.2 (December 2005).

107 Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, pp. 1–88; European Commission, Proposal for a Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications), COM(2017) 10 (final), January 10, 2017.

108 See Data Protection Convention, arts. 5–10; OECD Privacy Framework.

109 See, e.g., Federal Law for the Protection of Personal Data in the Possession of Private Actors (Mexico) (July 5, 2010); Law on the Protection of Personal Data (Argentina), Law 25.326 (October 30, 2000); see also “Global Trends in Privacy Protection.”

110 Digital Rights Ireland.

111 Ibid., ¶¶ 27, 58–68.

112 Tele2 Sverige, ¶ 103. Whereas Digital Rights Ireland dealt with the EU Data Retention Directive, Tele2 Sverige addressed domestic data-retention laws in the United Kingdom and Sweden.

113 Ibid., ¶¶ 106–11. The CJEU also held that authorities must notify individuals whose data has been retained once notification is unlikely to jeopardize the relevant investigations. Ibid., ¶ 121.

114 S. and Marper v. United Kingdom, ¶ 125. In this case, the European Court looked to the Data Protection Convention to interpret the scope of the right to privacy enshrined in Article 8 and held that the indefinite retention of biometric data of individuals suspected of committing criminal offenses was inconsistent with Article 8.

115 Google Spain SL and Google Inc. v. Agencia Española de Protección de Datos (AEPD) and Mario Costeja González, CJEU, C-131/12, ECLI:EU:C:2014:317 (May 13, 2014), ¶¶ 94, 97.

116 Civil society organizations and human rights experts have increasingly analyzed private companies’ data-collection practices in light of their responsibilities under the UN Guiding Principles on Business and Human Rights. See, e.g., Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, A/HRC/32/38 (May 11, 2016); Cannataci Report ¶ 46(f); Inter-American Commission on Human Rights, Freedom of Expression and the Internet, ¶ 112; “Report of the Freedom Online Coalition Working Group Three, Privacy and Transparency Online”; Ranking Digital Rights, 2015 Corporate Accountability Index (November 2015), pp. 16–18.

117 See, e.g., J. Daskal, “The Un-territoriality of Data” (2015) 125 Yale Law Journal 326–97.

118 See, e.g., Weber and Saravia v. Germany, ¶ 72 (in which the European Court declined to determine whether a complaint filed by applicants located outside of Germany alleging violations of their privacy rights by the German state was admissible ratione personae); see also Submission of Privacy International et al., OHCHR consultation in connection with General Assembly Resolution 68/167, “The right to privacy in the digital age,” April 1, 2014.

119 ICCPR art. 2(1).

120 General Comment No. 31, ¶ 10; see also Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory, Advisory Opinion, 2004 ICJ Rep. 136, ¶¶ 109–11; I/A C.H.R., Report No. 109/99, Case 10.951, Coard et al. (United States), September 29, 1999, ¶ 37.

121 Legal Consequences of the Construction of a Wall, ¶ 109.

122 See, e.g., United States Department of State, Memorandum Opinion on the Geographic Scope of the International Covenant on Civil and Political Rights, October 19, 2010; UN Human Rights Committee, “Human Rights Committee considers report of the United States,” March 14, 2014, www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=14383; Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory, ¶ 110.

123 Al-Skeini v. United Kingdom, Eur. Ct. H.R., App. No. 55721/07, ¶¶ 133–40 (Grand Chamber, July 7, 2011). Notably, the two cases before the European Court dealing with extraterritorial communications surveillance involved applicants who were both nationals and non-nationals, and the court did not address the state’s obligations to the latter. Weber and Saravia, ¶ 72; Liberty v. United Kingdom, Eur. Ct. H.R., App. No. 58243/00 (July 1, 2008).

124 See J. Daskal, “Extraterritorial Surveillance under the ICCPR … The Treaty Allows It!,” Just Security, March 7, 2014, www.justsecurity.org/7966/extraterritorial-surveillance-iccpr-its-allowed/.

125 See, e.g., M. Milanovic, “Human Rights Treaties and Foreign Surveillance: Privacy in the Digital Age” (2015) 56(1) Harvard International Law Journal 81–146.

126 See, e.g., Letter to the Editor from M. Nowak, “What does extraterritorial application of human rights treaties mean in practice?,” Just Security, March 11, 2014, www.justsecurity.org/8087/letter-editor-manfred-nowak-extraterritorial-application-human-rights-treaties-practice/; Letter to the Editor from Former Member of the Human Rights Committee, M. Scheinin, Just Security, March 10, 2014, www.justsecurity.org/8049/letter-editor-martin-scheinin/; P. Margulies, “The NSA in Global Perspective: Surveillance, Human Rights, and International Counterterrorism” (2014) 82 Fordham Law Review 2137–67 at 2148–52 (arguing that a state exercises “virtual control” over communications infrastructure when it conducts surveillance).

127 See, e.g., “The NSA in Global Perspective”; “Human Rights Treaties and Foreign Surveillance,” 118–119; see also Memorandum Opinion on the Geographic Scope of the International Covenant on Civil and Political Rights, pp. 49–50, 55–56 (arguing that a state may have obligations based on a sliding scale, and proposing that “once a state exercises authority or effective control over an individual or context, it becomes obligated to respect Covenant rights to the extent of that exercise of authority”).

128 A. Deeks, “An International Legal Framework for Surveillance” (2015) 55 Virginia Journal of International Law 251–367 at 310–11; see also Daskal, “Extraterritorial Surveillance” (arguing that the conduct of surveillance on foreign nationals abroad is not covered by the ICCPR).

129 OHCHR Report, ¶ 34.

130 Ibid., ¶ 36; see also La Rue Report 2013, ¶¶ 64, 87; Concluding observations on the fourth periodic report of the United States of America, U.N. Doc. CCPR/C/USA/CO/4 (April 23, 2014), ¶ 22 (maintaining that the state’s obligations in the realm of privacy rights do not differ depending on the nationality or location of the target of surveillance).

131 Presidential Policy Directive/PPD-28, Signals Intelligence Activities, The White House, January 17, 2014.

132 Regulation (EU) 2016/679, art. 3; Proposal for a Regulation on Privacy and Electronic Communications, art. 3.

133 Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281, 23.11.1995, pp. 31–50, arts. 25, 26(2).

134 Maximilian Schrems v. Data Protection Commissioner, CJEU, C-362/14, ECLI:EU:C:2015:650, ¶ 73 (October 6, 2015) (emphasis added).

135 Regulation (EU) 2016/679, arts. 44–50.

136 Council of the European Union, “Umbrella agreement: EU ready to conclude with the US,” December 2, 2016, www.consilium.europa.eu/en/press/press-releases/2016/12/02-umbrella-agreement/.

137 A. Hern, “Google says non to French demand to expand right to be forgotten worldwide,” The Guardian, July 30, 2015.

138 A. Hern, “Google takes right to be forgotten battle to France’s highest court,” The Guardian, May 19, 2016.

139 Schrems, ¶ 93. The CJEU also found insufficient evidence that Europeans could obtain an effective remedy for violations of their privacy rights. Ibid., ¶ 95.

141 Digital Rights Ireland v. Commission, T-670/16, Action brought September 16, 2016.

142 See, e.g., E. Kosinski and S. Asayama, “Transfer of Personal Data under Japan’s Amended Personal Information Protection Act,” White and Case Technology Newsflash, October 13, 2015; P. A. Palazzi, “New Draft of Argentine Data Protection Law Open for Comment,” International Association of Privacy Professionals, February 8, 2017; “Brazil Releases Draft Data Protection Bill,” Hunton and Williams Privacy and Information Security Law Blog, February 6, 2015.

143 Daskal, “The Un-territoriality of Data” at 365–70, 373–78; J. Daskal, “Law Enforcement Access to Data Across Borders: The Evolving Security and Rights Issues” (2016) 8 Journal of National Security Law & Policy 473–501.

144 In re Warrant to Search a Certain Email Account Controlled and Maintained by Microsoft Corp., Case 14–2985, Document 286–1 (2d Cir. July 14, 2016).

145 Investigatory Powers Act, §§ 41–43, 52, 85, 126–27, 149, 168–69, 190.

146 “The Un-territoriality of Data,” 389–95; “Law Enforcement Access to Data Across Borders,” 487–91.

147 “The Un-territoriality of Data,” 395.

148 B. Smith, “Time for an international convention on government access to data,” Microsoft Corporate Blogs, January 20, 2014.

149 Daskal, “Law Enforcement Access to Data Across Borders: The Evolving Security and Rights Issues” at 492–94.

11 Human Rights and Private Actors in the Online Domain

1 Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, David Kaye, ¶ 11, U.N. Doc. A/HRC/29/32 (May 22, 2015) (“2015 Kaye Report”).

2 M. Ammori, “The ‘New’ New York Times: Free Speech Lawyering in the Age of Google and Twitter” (2014) 127 Harvard Law Review 2259–94 at 2266.

3 Ibid., at 2267.

4 “Number of monthly active Facebook users worldwide as of 2nd quarter 2016,” Statista, www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/.

5 C. Smith, “100 Google Search Statistics and Fun Facts,” DMR, http://expandedramblings.com/index.php/by-the-numbers-a-gigantic-list-of-google-stats-and-facts/.

6 Yochai Benkler argues that the Internet and the networked information economy provide us with distinct improvements in the structure of the public sphere over mass media. This is due to the information production and cultural activity of non-market actors, which the Internet enables, and which essentially allow a large number of actors to see themselves as potential contributors to public discourse and potential actors in political arenas. Y. Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven, CT: Yale University Press, 2006), p. 220. “The network allows all citizens to change their relationship to the public sphere. They no longer need to be consumers and passive spectators. They can become creators and primary subjects. It is in this sense that the internet democratizes.” Ibid., p. 272.

7 J. M. Balkin, “Old-School/New-School Speech Regulation” (2014) 127 Harvard Law Review 2296–342.

8 This includes groups and networks such as the Electronic Privacy Information Center (US), the Electronic Frontier Foundation (US), Privacy International (UK/global), European Digital Rights (European), Access Now (US), and the Association for Progressive Communications (global).

9 A key issue in the human rights context may be that content with historical or legal value – e.g., information that may serve as evidence of a human rights violation or war crime – is taken down for violation of terms of service or community standards.

10 In the United States, the Federal Trade Commission has focused on Internet platforms on several occasions. For example, since 2011, Facebook Inc. has been under a consent order by the FTC for deceiving consumers by telling them they could keep their information on Facebook private and then repeatedly allowing it to be shared and made public. The order requires that Facebook obtain periodic assessments of its privacy practices by independent third-party auditors for the next twenty years. For more information on the case, please refer to “Facebook, Inc.,” Federal Trade Commission, www.ftc.gov/enforcement/cases-proceedings/092-3184/facebook-inc. In Europe, the Dutch Data Protection Authority (DPA) imposed an incremental penalty payment on Google in 2013 based on practices introduced with Google’s privacy policy in 2012. According to the DPA, Google combines the personal data collected by all kinds of different Google services without adequately informing users in advance and without asking for their consent. In July 2015, the DPA announced that Google had revised its privacy policy following the demands of the DPA, and that Google had until the end of December 2015 to obtain the unambiguous consent of all of its users at each step. For more information on the case, please refer to “Dutch DPA: privacy policy Google in breach of data protection law,” Autoriteit Persoonsgegevens, https://cbpweb.nl/en/news/dutch-dpa-privacy-policy-google-breach-data-protection-law.

11 E. B. Laidlaw, Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (Cambridge: Cambridge University Press, 2015).

12 Z. Tufekci, “Facebook: The Privatization of our Privates and Life in the Company Town,” Technosociology: our tools, ourselves, http://technosociology.org/?p=131.

13 T. L. Gillespie, “The Politics of Platforms” (2010) 12 New Media & Society 347–64 at 347.

14 In Consent of the Networked: The Worldwide Struggle for Internet Freedom (New York: Basic Books, 2012), p. 150, Rebecca MacKinnon refers to Facebook’s “digital kingdom” as Facebookistan. In Foreign Policy, she further argues that “Facebook is not a physical country, but with 900 million users, its ‘population’ comes third after China and India. It may not be able to tax or jail its inhabitants, but its executives, programmers, and engineers do exercise a form of governance over people’s online activities and identities.” R. MacKinnon, “Ruling Facebookistan,” Foreign Policy, June 14, 2012, http://foreignpolicy.com/2012/06/14/ruling-facebookistan/; see also A. Chander, “Facebookistan” (2012) 90 North Carolina Law Review 1807–42.

15 Ammori, “The ‘New’ New York Times” at 2263.

16 J. Rosen, “The Deciders: The Future of Privacy and Free Speech in the Age of Facebook and Google” (2012) 80 Fordham Law Review 1525–38 at 1536.

17 Ammori, “The ‘New’ New York Times” at 2265.

19 B. Stone, “The Tweets Must Flow,” Twitter, January 28, 2011, https://blog.twitter.com/2011/tweets-must-flow.

21 See Balkin, “Old-School/New-School Speech Regulation” at 2296–342.

22 Global Network Initiative, www.globalnetworkinitiative.org.

23 A transparency report discloses statistics related to government requests for user data or content over a certain period of time. Google was the first online platform to publish a transparency report in 2010, with Twitter following in 2012.

24 Ranking Digital Rights published its first annual Corporate Accountability Index in November 2015. The index ranks sixteen Internet and telecommunication companies according to thirty-one indicators, focused on corporate disclosure of policies and practices that affect users’ freedom of expression and privacy. Ranking Digital Rights, https://rankingdigitalrights.org.

25 Examples include the US Federal Trade Commission (FTC) investigation into Google’s practices in connection with its YouTube Kids app (2015), the FTC Consent Order on Facebook (2011), the Dutch Data Protection Authority case against Google (2013), the Austrian class action privacy lawsuit against Facebook (rejected by the Austrian Court in July 2015 due to the lack of jurisdiction), the Google/Spain ruling of the European Court of Justice (2014), the Belgian Privacy Commissioners’ recommendations to Facebook (2015), the Irish Data Protection Authority’s audit of, and recommendations to, Facebook (2011), the European Union’s antitrust case against Google (2015), and the Article 29 Working Party’s examination of Google’s Privacy Policy (2012).

26 At the first UN World Summit on the Information Society (WSIS), held in two phases in 2003 and 2005, it was confirmed that international human rights law serves as the baseline for information and communications technology (ICT)-related policy. Since WSIS, UN agencies such as the International Telecommunication Union, the United Nations Development Programme, and UNESCO have been responsible for follow-up action to ensure that the WSIS vision is implemented. This implementation process was reviewed in December 2015. World Summit on the Information Society, www.itu.int/wsis/review/2014.html.

27 Human Rights Council, Res. 20/8, The Promotion, Protection and Enjoyment of Human Rights on the Internet, U.N. Doc. A/HRC/RES/20/8 (July 16, 2012); Human Rights Council Res. 26/13, The Promotion, Protection and Enjoyment of Human Rights on the Internet, U.N. Doc. A/HRC/RES/26/13 (July 14, 2014); “The right to privacy in the digital age,” U.N. Doc. A/HRC/27/37 (June 30, 2014).

28 Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Frank La Rue, U.N. Doc. A/HRC/17/27 (May 16, 2011) (“2011 La Rue Report”); Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Frank La Rue, U.N. Doc. A/HRC/23/40 (April 17, 2013) (“2013 La Rue Report”); 2015 Kaye Report.

29 2011 La Rue Report.

30 2013 La Rue Report.

31 2011 La Rue Report, ¶ 44.

32 R. F. Jørgensen (ed.), Human Rights in the Global Information Society (Cambridge, MA: MIT Press, 2006); C. Garipidis and N. Akrivopoulou, Human Rights and Risks in the Digital Era: Globalization and the Effects of Information Technologies (Hershey, PA: Information Science Reference, 2012); W. Benedek and R. Madanmohan, “Human Rights and Information and Communication Technology – Background Paper,” in Proceedings of the 12th Informal Asia-Europe Meeting (ASEM) Seminar on Human Rights (Singapore: Asia-Europe Foundation, 2013), Seoul, Republic of Korea, June 27–29, 2012, pp. 34–87; D. Korff, The Rule of Law on the Internet and in the Wider Digital World – Issue Paper for the Council of Europe (Strasbourg: Council of Europe, 2014); R. F. Jørgensen, Framing the Net: The Internet and Human Rights (Cheltenham: Edward Elgar Publishing, 2013); C. Padovani, F. Musiani, and E. Pavan, “Investigating Evolving Discourses on Human Rights in the Digital Age: Emerging Norms and Policy Challenges” (2010) 72(4–5) International Communication Gazette 359–78; L. Horner, D. Hawtin, and A. Puddephatt, Directorate-General for External Policies of the Union Study, “Information and Communication Technologies and Human Rights,” EXPO/B/DROI/2009/24 (June 2010).

33 Laidlaw, Regulating Speech in Cyberspace, p. 59.

34 J. M. Balkin, “Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society” (2004) 79 New York University Law Review 1–55 at 40.

35 Laidlaw, Regulating Speech in Cyberspace, p. 18.

36 For an elaboration of the Internet as a new kind of public sphere, please refer to Jørgensen, Framing the Net, pp. 81–106.

37 Perry Educ. Ass’n v. Perry Local Educators’ Ass’n, 460 U.S. 37 (1983).

38 R. Moon, “Access to State-Owned Property,” in The Constitutional Protection of Freedom of Expression (Toronto: University of Toronto Press, 2000), pp. 148–81.

39 Ibid., p. 148.

40 Stacey D. Schesser, “A New Domain for Public Speech: Opening Public Spaces Online” (2006) 94 California Law Review 1791–825 at 1792.

41 Arguably, public streets and parks today are less significant than online platforms as spaces for public discourse.

42 See, e.g., “CDA 230: The Most Important Law Protecting Internet Speech,” Electronic Frontier Foundation, www.eff.org/issues/cda230.

43 Schesser, “A New Domain for Public Speech” at 1799. At the other end of the spectrum are countries where the state imposes liability regimes on Internet intermediaries in order to control online content. For a global overview of such practices and their negative impact on online freedom of expression, see, for example, the global surveys presented by the OpenNet Initiative (ONI), https://opennet.net/. Please note that the ONI stopped collecting data as of December 2014.

44 This model, which builds on Karine Barzilai-Nahon’s network gatekeepers theory (K. Barzilai-Nahon, “Toward a Theory of Network Gatekeeping: A Framework for Exploring Information Control,” [2008] 59 Journal of the American Society for Information Science and Technology 1493–512), is elaborated in Laidlaw, Regulating Speech in Cyberspace, pp. 44–46.

45 Laidlaw, Regulating Speech in Cyberspace, p. 53.

46 Ibid., p. 48.

47 B. Schneier, Data and Goliath: The Hidden Battles to Capture Your Data and Control Your World (New York: W. W. Norton & Company, 2015), p. 58.

48 Ibid., pp. 60–61.

49 S. Zuboff, “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization” (2015) 30 Journal of Information Technology 75–89.

50 J. E. Cohen, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (New Haven, CT: Yale University Press, 2012).

51 N. Elkin-Koren, “Affordances of Freedom: Theorizing the Rights of Users in the Digital Era” (2012) 6 Jerusalem Review of Legal Studies 96–109 at 97.

52 P. van Dijk et al. (eds.), Theory and Practice of the European Convention on Human Rights (Antwerp, Oxford: Intersentia, 2006), p. 6.

53 R. Deibert et al. (eds.), Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace (Cambridge, MA: MIT Press, 2010); R. Deibert et al. (eds.), Access Denied: The Practice and Policy of Global Internet Filtering (Cambridge, MA: MIT Press, 2008).

54 See “OECD Guidelines for Multinational Enterprises,” Organization for Economic Co-operation and Development, http://mneguidelines.oecd.org/text/; “ILO Declaration on Fundamental Principles and Rights at Work,” International Labour Organization, www.ilo.org/declaration/lang--en/index.htm.

55 Business Leaders Initiative on Human Rights, U.N. Global Compact, and Office of the High Commissioner for Human Rights, “A Guide for Integrating Human Rights into Business Management,” (2007), p. 8, www.ohchr.org/Documents/Publications/GuideHRBusinessen.pdf. For literature on the normative grounding of CSR in the human rights discourse, see, e.g., T. Campbell, “The Normative Grounding of Corporate Social Responsibility: A Human Rights Approach,” in D. McBarnet (ed.), The New Corporate Accountability: Corporate Social Responsibility and the Law (Cambridge: Cambridge University Press, 2007). According to Campbell, human rights offers primarily a discursive rather than legal framework for CSR.

56 Report of the Special Representative of the Secretary-General on the Issue of Human Rights and Transnational Corporations and Other Business Enterprises, John Ruggie, ¶ 35, U.N. Doc. A/HRC/14/27 (April 9, 2010).

57 United Nations Global Compact, www.unglobalcompact.org/.

58 Report of the Special Representative John Ruggie, Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework, U.N. Doc. A/HRC/17/31 (March 21, 2011) (“2011 Ruggie Report”).

60 2011 Ruggie Report, p. 4.

61 Ibid., pp. 17–20.

62 Ibid., p. 24.

63 Ibid., p. 11.

64 Ibid., p. 12. The International Bill of Human Rights consists of the Universal Declaration of Human Rights (1948), the International Covenant on Civil and Political Rights (1966), and the International Covenant on Economic, Social, and Cultural Rights (1966).

65 For guidance on human rights impact assessment, see, for example, Rights and Democracy, “Getting it Right: Human Rights Impact Assessment Guide,” International Centre for Human Rights and Democratic Development, http://hria.equalit.ie/en/; FIDH, “Community-based Human Rights Impact Assessments,” www.fidh.org/en/issues/globalisation-human-rights/business-and-human-rights/community-based-human-rights-impact-assessments.

66 For an elaboration of the argument see, for example, J. Knox, “The Ruggie Rules: Applying Human Rights Law to Corporations,” in R. Mares (ed.), The UN Guiding Principles on Business and Human Rights: Foundations and Implementation (Leiden, Boston: Martinus Nijhoff, 2012).

67 For an account of this development, see J. Ruggie, “Business and Human Rights: The Evolving International Agenda” (2007) 101 American Journal of International Law 819–40; Mares, UN Guiding Principles on Business and Human Rights, pp. 1–49.

68 2011 Ruggie Report, p. 1.

69 Ibid., p. 25.

70 S. Lagoutte, “The State Duty to Protect against Business-Related Human Rights Abuses: Unpacking Pillar 1 and 3 of the UN Guiding Principles on Human Rights and Business,” Working Paper, Human Rights Research Papers, No. 2014/1 (2014), p. 9.

71 See, e.g., Tatar v. Romania, Eur. Ct. H.R., App. No. 67021/01 (January 27, 2009); Fadeyeva v. Russia, Eur. Ct. H.R., App. No. 55723/00 (June 9, 2005); Öneryildiz v. Turkey, Eur. Ct. H.R., App. No. 48939/99 (Grand Chamber, November 30, 2004); Guerra & Others v. Italy, Eur. Ct. H.R., App. No. 14967/89 (Grand Chamber, February 19, 1998); López Ostra v. Spain, Eur. Ct. H.R., App. No. 16798/90 (December 9, 1994).

72 S. A. Aaronson and I. Higham, “‘Re-Righting Business’: John Ruggie and the Struggle to Develop International Human Rights Standards for Transnational Firms” (2013) 35 Human Rights Quarterly 333–64; D. Bilchitz, “A Chasm between ‘Is’ and ‘Ought’?: A Critique of the Normative Foundations of the SRSG’s Framework and the Guiding Principles,” in S. Deva and D. Bilchitz (eds.), Human Rights Obligations of Business: Beyond the Corporate Responsibility to Respect? (Cambridge: Cambridge University Press, 2013), pp. 107–37.

73 C. Methven O’Brien and S. Dhanarajan, “The Corporate Responsibility to Respect Human Rights: A Status Review,” Working Paper, National University of Singapore, 2015/005 (2015), 4.

74 See the presentation of the working group by the UN High Commissioner for Human Rights, www.ohchr.org/EN/Issues/Business/Pages/WGHRandtransnationalcorporationsandotherbusiness.aspx.

77 Institute for Human Rights and Business and SHIFT for the European Commission, “ICT Sector Guide on Implementing the UN Guiding Principles on Business and Human Rights,” 2012, Section 2.

78 In the ruling Editorial Board of Pravoye Delo and Shtekel v. Ukraine, Eur. Ct. H.R., App. No. 33014/05 (May 5, 2011), the European Court of Human Rights for the first time acknowledged that Article 10 imposes on states a positive obligation to create an appropriate regulatory framework to ensure effective protection of journalists’ freedom of expression on the Internet.

79 Methven O’Brien and Dhanarajan, “The Corporate Responsibility to Respect Human Rights,” 5.

81 It should be noted that informational privacy covers only one aspect of privacy.

82 Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos, Mario Costeja González, CJEU, Case C-131/12 (May 13, 2014).

83 2011 La Rue Report.

84 See US Copyright Office, “The Digital Millennium Copyright Act of 1998: U.S. Copyright Office Summary,” www.copyright.gov/legislation/dmca.pdf.

85 See Schesser, “A New Domain for Public Speech”; I. Brown, “Internet Self-Regulation and Fundamental Rights,” Index on Censorship (2010), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1539942; B. Frydman and I. Rorive, “Regulating Internet Content through Intermediaries in Europe and the USA” (2002) 23(1) Zeitschrift für Rechtssoziologie 41–59.

86 Business and Human Rights: Towards Operationalizing the “Protect, Respect and Remedy” Framework, U.N. Doc. A/HRC/11/13 (April 22, 2009), at 17.

87 S. Ard, “Mark Zuckerberg’s IPO Letter: Why Facebook Exists,” Yahoo! Finance, February 1, 2012, http://finance.yahoo.com/news/mark-zuckerberg’s-ipo-letter--why-facebook-exists.html.

88 Ammori, “The ‘New’ New York Times” at 2270.

89 C. Cain Miller, “Google Has No Plans to Rethink Video Status,” The New York Times, September 14, 2012, http://perma.cc/LX2F-DKE9. Commentators have argued that Google’s philosophy likely impacts the thinking at companies across Silicon Valley, since its alumni have been shaped by shared experiences and remain connected through an ongoing informal network that exchanges views on difficult questions. Ammori, “The ‘New’ New York Times” at 2269.

90 The website is available at: www.globalnetworkinitiative.org.

91 See C. M. Maclay, “An Improbable Coalition: How Businesses, Non-Governmental Organizations, Investors and Academics Formed the Global Network Initiative to Promote Privacy and Free Expression Online,” PhD thesis, Northeastern University (2014) (providing a detailed account of the formation of the GNI).

92 The GNI Principles are available at: http://globalnetworkinitiative.org/principles/index.php.

93 “The Global Network Initiative and the Telecommunications Industry Dialogue join forces to advance freedom of expression and privacy,” Global Network Initiative, www.telecomindustrydialogue.org and http://globalnetworkinitiative.org/news/global-network-initiative-and-telecommunications-industry-dialogue-join-forces-advance-freedom.

94 “Core Commitments,” Global Network Initiative, https://globalnetworkinitiative.org/corecommitments/index.php.

95 In June 2014, the GNI board consolidated the assessment process into a two-stage model: first, self-reporting from the companies to GNI after one year of membership; second, assessment of each company member every two years. The assessment is carried out by a list of GNI-approved assessors and examines the policies, systems, and procedures put in place by the company to comply with the GNI Principles.

96 MacKinnon, Consent of the Networked, pp. 179–82. For news coverage on this, see, for example, L. Downes, “Why no one will join the Global Network Initiative,” Forbes, March 30, 2011, https://www.forbes.com/sites/larrydownes/2011/03/30/why-no-one-will-join-the-global-network-initiative/#275f5878d782.

97 D. Doane, The Myth of CSR: The Problem with Assuming That Companies Can Do Well While Also Doing Good Is That Markets Don’t Really Work That Way (Stanford, CA: Stanford Graduate School of Business, 2005), pp. 22–29.

98 “GNI Principles: Section on Freedom of Expression,” Global Network Initiative, http://globalnetworkinitiative.org/principles/index.php#18.

99 “GNI Implementation Guidelines: Section on Freedom of Expression,” Global Network Initiative, http://globalnetworkinitiative.org/implementationguidelines/index.php#29.

100 In 2012, the “Innocence of Muslims” video sparked outrage in countries throughout the Middle East for its perceived criticism of Islam. While YouTube allowed the video to remain online in the United States, stating that the video did not break US law, it was removed in countries where it violated local laws, as well as in Libya and Egypt, where it did not violate local laws. Commentators have argued that the case is illustrative of the way private companies carry out worldwide speech “regulation” – sometimes in response to government demands, sometimes to enforce their own terms of service. S. Benesch and R. MacKinnon, “The Innocence of YouTube,” Foreign Policy, October 5, 2012, http://foreignpolicy.com/2012/10/05/the-innocence-of-youtube/.

101 Ammori, “The ‘New’ New York Times” at 2276.

102 The main channel for identifying objectionable content is user reporting enabled by technical features in the platform.

103 Cain Miller, “Google Has No Plans to Rethink Video Status.”

104 “GNI Principles: Section on Privacy,” Global Network Initiative, http://globalnetworkinitiative.org/principles/index.php#19.

105 “GNI Implementation Guidelines, Section on Privacy,” Global Network Initiative, http://globalnetworkinitiative.org/implementationguidelines/index.php#28.

106 “KU Leuven Centre for IT & IP Law and iMinds-SMIT Advise Belgian Privacy Commission in Facebook Investigation,” KU Leuven, www.law.kuleuven.be/icri/en/news/item/icri-cir-advises-belgian-privacy-commission-in-facebook-investigation.

107 Commission for the Protection of Privacy, Recommendation No. 04/2015 (May 13, 2015), www.privacycommission.be/sites/privacycommission/files/documents/recommendation_04_2015_0.pdf.

108 J. van Dijck and T. Poell, “Understanding Social Media Logic” (2013) 1 Media and Communication 2–14.

109 C. Methven O’Brien and S. Dhanarajan, “The Corporate Responsibility to Respect Human Rights” at 5.

111 The importance of access to remedies in an online context is stressed in the Council of Europe’s guide to human rights for Internet users. Council of Europe, “Recommendation of the Committee of Ministers to Member States on a Guide on Human Rights for Internet Users,” MSI-DUI (2013) 07Rev7 (April 16, 2014).

112 The General Data Protection Regulation (EU 2016/679) has been highly controversial and its implications widely addressed by scholars and activists alike. See, e.g., A. Dix, “EU Data Protection Reform Opportunities and Concerns” (2013) 48 Intereconomics 268–86; D. Naranjo, “General Data Protection Regulation: Moving forward, slowly,” European Digital Rights, June 3, 2015, https://edri.org/author/diego/page/5/.

12 Technology, Self-Inflicted Vulnerability, and Human Rights

1 For a brief summary of key Snowden revelations, see Human Rights Watch and American Civil Liberties Union, With Liberty to Monitor All (2014), pp. 8–11, www.hrw.org/sites/default/files/reports/usnsa0714_ForUPload_0.pdf. In the interest of full disclosure, note that I was the researcher and author of that report. See also J. Risen and L. Poitras, “N.S.A. Collecting Millions of Faces from Web Images,” The New York Times, May 31, 2014, www.nytimes.com/2014/06/01/us/nsa-collecting-millions-of-faces-from-web-images.html (describing the collection of photographs for facial recognition purposes).

2 See, e.g., R. Gallagher, “U.K.’s Mass Surveillance Databases were Unlawful for 17 Years, Court Rules,” The Intercept, October 17, 2016, https://theintercept.com/2016/10/17/gchq-mi5-investigatory-powers-tribunal-bulk-datasets/.

3 See “World Internet Usage and Population Statistics,” Miniwatts Marketing Group, www.internetworldstats.com/stats.htm.

4 “Number of mobile phone users worldwide from 2013 to 2019 (in billions),” Statista, www.statista.com/statistics/274774/forecast-of-mobile-phone-users-worldwide/.

5 “Waiver” here is defined narrowly; it refers to discrete choices or events that remove specific, otherwise-protected information from under the umbrella of the human right to privacy (instead of a broad, blanket alienation of the right to privacy for all of one’s protected matters).

6 See, e.g., With Liberty to Monitor All; “Chilling Effects: NSA Surveillance Drives U.S. Writers to Self-Censor,” PEN American Center, https://pen.org/chilling-effects; “Surveillance Self-Defense,” Electronic Frontier Foundation, https://ssd.eff.org/en; A. Toh, F. Patel, and E. Goitein, “Overseas Surveillance in an Interconnected World,” Brennan Center for Justice, www.brennancenter.org/publication/overseas-surveillance-interconnected-world.

7 See G.A. Res. 68/167, The right to privacy in the digital age, U.N. Doc. A/Res/68/167 (December 18, 2013); G.A. Res. 69/166, The right to privacy in the digital age, U.N. Doc. A/Res/69/166 (December 18, 2014).

8 Office of the United Nations High Commissioner for Human Rights, The right to privacy in the digital age, U.N. Doc. A/HRC/27/37 (June 30, 2014).

9 See “Human Rights Council creates mandate of Special Rapporteur on the right to privacy,” Office of the United Nations High Commissioner for Human Rights, March 26, 2015, www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=15763&LangID=E.

10 See, e.g., American Civil Liberties Union, Information Privacy in the Digital Age (New York: ACLU Foundation, 2015), www.aclu.org/sites/default/files/field_document/informational_privacy_in_the_digital_age_final.pdf; G. Alex Sinha, “NSA Surveillance Since 9/11 and the Human Right to Privacy” (2014) 59 Loyola Law Review 861–946.

11 See, e.g., G.A. Res. 217 (III) A Universal Declaration of Human Rights, art. 12 (December 10, 1948); Inter-Am. Comm’n H.R. 9th Conf., American Declaration of the Rights and Duties of Man, art. V (May 2, 1948); Council of Europe, European Convention on Human Rights, art. 8.

12 International Covenant on Civil and Political Rights (“ICCPR”), art. 17.

13 Two ICCPR Articles, 18 and 19, do refer to individual choice (to one’s right to choose a belief system and to choose preferred media, respectively). See ICCPR, arts. 18, 19. But those choices are essential to the rights themselves rather than related to the waiver of a covenant right. Ibid.

14 ICCPR, art. 17.

15 See, e.g., “Privacy in the Digital Age,” ¶¶ 22–23; Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, Martin Scheinin, ¶ 17, U.N. Doc. A/HRC/13/37 (December 28, 2009); Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, ¶ 29, U.N. Doc. A/HRC/23/40 (April 17, 2013); Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, ¶ 29, U.N. Doc. A/HRC/29/32 (May 22, 2015).

16 In light of the consensus coalescing among those human rights bodies, advocacy groups like the American Civil Liberties Union have attempted to map out in substantial detail the nature of state obligations under Article 17. See, e.g., “Informational Privacy in the Digital Age.”

17 See, e.g., K. L. Razzouk, “Explanation of Position on draft resolution L.26/ Rev. 1 The Right to Privacy in the Digital Age,” http://usun.state.gov/remarks/6259 (the link was active at the time of this writing).

18 See ibid.; ICCPR art. 17(1).

19 ICCPR art. 2(1).

20 While he was a legal advisor at the US State Department, Harold Koh advocated for relaxing that standard and accepting that some ICCPR obligations might attach to US conduct outside of its own territory. See Harold Hongju Koh, “Memorandum Opinion on the Geographic Scope of the International Covenant on Civil and Political Rights,” www.justsecurity.org/wp-content/uploads/2014/03/state-department-iccpr-memo.pdf. As recently as its 2015 submission to the Human Rights Committee, which monitors state compliance with the ICCPR, the United States has nevertheless continued to assert the original, narrower view. See Permanent Mission of the United States of America to the Office of the United Nations, “One-Year Follow-up Response of the United States of America to Priority Recommendations of the Human Rights Committee on its Fourth Periodic Report on Implementation of the International Covenant on Civil and Political Rights,” ¶ 33, www.state.gov/documents/organization/242228.pdf. By contrast, the United States has softened its position on the extraterritorial obligations under another human rights convention, the Convention Against Torture. White House Office of the Press Secretary, “Statement by NSC Spokesperson Bernadette Meehan on the U.S. Presentation to the Committee Against Torture,” www.whitehouse.gov/the-press-office/2014/11/12/statement-nsc-spokesperson-bernadette-meehan-us-presentation-committee-a.

21 See, e.g., Human Rights Committee, “Concluding observations on the fourth report of the United States of America,” ¶ 4, www.justsecurity.org/wp-content/uploads/2014/03/UN-ICCPR-Concluding-Observations-USA.pdf. See also Ryan Goodman, “UN Human Rights Committee Says ICCPR Applies to Extraterritorial Surveillance: But is that so novel?,” www.justsecurity.org/8620/human-rights-committee-iccpr-applies-extraterritorial-surveillance-novel/.

22 See ICCPR, arts. 19, 21, 22.

23 See, e.g., US Department of State, “Report of the United States of America Submitted to the U.N. High Commissioner for Human Rights in Conjunction with the Universal Periodic Review,” ¶ 83, www.state.gov/j/drl/upr/2015/237250.htm; “One-Year Follow-up Response,” ¶ 29.

24 Truly public information – especially information one has chosen to make public – can likely be collected without interfering with a person’s privacy, family, home, or correspondence. It is therefore difficult to see how it could fall within the scope of Article 17.

25 In the context of US constitutional law, one is protected from searches and seizures by government agents when one has a reasonable expectation of privacy. Whether the proper international law analysis deploys a similar concept, the spectrum laid out here is a useful starting point for identifying potentially relevant subjective and objective markers (such as intentions, expectations, reasonableness, and so forth). As discussed below, however, the proper approach to waiver under Article 17 is unlikely to mirror (closely, at any rate) the approach under US domestic law.

26 For particularly helpful background on these cases, see Orin S. Kerr, “The Case for the Third-Party Doctrine” (2009) 107 Michigan Law Review 561, 567–70.

27 See On Lee v. United States, 343 U.S. 747 (1952); Lopez v. United States, 373 U.S. 427 (1963); Lewis v. United States, 385 U.S. 206 (1966); Hoffa v. United States, 385 U.S. 293 (1966); United States v. White, 401 U.S. 745 (1971).

28 425 U.S. 435 (1976).

29 See Smith v. Maryland, 442 U.S. 735 (1979). Other cases in this second series include Couch v. United States, 409 U.S. 322 (1973) and United States v. Payner, 447 U.S. 727 (1980).

30 See, e.g., J. Villasenor, “What You Need to Know about the Third-Party Doctrine,” The Atlantic, www.theatlantic.com/technology/archive/2013/12/what-you-need-to-know-about-the-third-party-doctrine/282721/; C. Cohn and P. Higgins, “Rating Obama’s NSA Reform Plan: EFF Scorecard Explained,” Electronic Frontier Foundation, www.eff.org/deeplinks/2014/01/rating-obamas-nsa-reform-plan-eff-scorecard-explained.

31 See, e.g., Kerr, “The Case for the Third-Party Doctrine.”

32 Google records every search conducted on its search engine. B. Caddy, “Google tracks everything you do. Here’s how to delete it,” Wired, www.wired.co.uk/article/google-history-search-tracking-data-how-to-delete. That has driven some users to use search engines that claim they do not, such as DuckDuckGo. See DuckDuckGo, “Why You Should Care – Search History,” https://duckduckgo.com/privacy#s2.

33 See, e.g., United States v. Warshak, 631 F.3d 266 (6th Cir. 2010).

34 Some simply reject the distinction between metadata and content, such as the Office of the UN High Commissioner of Human Rights. See “The right to privacy in the digital age,” ¶ 19.

35 O. Kerr and G. Nojeim, “The Data Question: Should the Third-Party Doctrine Be Revisited?,” ABA Journal, www.abajournal.com/magazine/article/the_data_question_should_the_third-party_records_doctrine_be_revisited/.

37 Smith, 442 U.S. at 743–44.

38 “More fundamentally, it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.” United States v. Jones, 132 S. Ct. 945, 957, 181 L. Ed. 2d 911 (2012) (Sotomayor J. concurring) (internal citations omitted).

39 D. Andreatta, “As pay phones vanish, so does lifeline for many,” USA Today, www.usatoday.com/story/news/nation/2013/12/17/pay-phone-decline/4049599/.

40 See Frank La Rue, “Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression,” ¶¶ 78, 85, www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf. Some have interpreted this report as suggesting that access to the Internet is itself a human right. See, e.g., D. Kravets, “U.N. Report Declares Internet Access a Human Right,” Wired, www.wired.com/2011/06/internet-a-human-right/.

41 Richard Stallman has discussed a similar issue in the specific context of software selection. See R. Stallman, “National Institute of Technology – Trichy – India – 17 February 2004,” Free Software Foundation, www.gnu.org/philosophy/nit-india.en.html.

42 Whether that ignorance is justifiable may depend on a case-by-case analysis that considers the sophistication of the user and the nature of the technology at issue.

43 See FBI National Press Office, “Update on Sony Investigation,” www.fbi.gov/news/pressrel/press-releases/update-on-sony-investigation; M. Riley et al., “Missed Alarms and 40 Million Stolen Credit Card Numbers: How Target Blew It,” Bloomberg, www.bloomberg.com/news/articles/2014-03-13/target-missed-warnings-in-epic-hack-of-credit-card-data; S. Thielman, “Yahoo hack: 1bn accounts compromised by biggest data breach in history,” The Guardian, www.theguardian.com/technology/2016/dec/14/yahoo-hack-security-of-one-billion-accounts-breached; K. Zetter, “Hackers Finally Post Stolen Ashley Madison Data,” Wired, www.wired.com/2015/08/happened-hackers-posted-stolen-ashley-madison-data/.

44 E. Fink and L. Segall, “Government workers cope with fallout from Ashley Madison hack,” CNN, http://money.cnn.com/2015/08/22/technology/ashley-madison-hack-government-workers/.

45 See With Liberty to Monitor All, p. 34.

46 Free software is also available for encrypting e-mails, but e-mail encryption remains notoriously clunky.

47 Leaving aside the information disseminated by Snowden, the NSA also recently had exploits stolen by hackers. E. Nakashima, “Powerful NSA hacking tools have been revealed online,” The Washington Post, www.washingtonpost.com/world/national-security/powerful-nsa-hacking-tools-have-been-revealed-online/2016/08/16/bce4f974-63c7-11e6-96c0-37533479f3f5_story.html. Separately, CIA Director John Brennan apparently had his private e-mail hacked by a teenager. K. Zetter, “Teen Who Hacked CIA Director’s Email Tells How He Did It,” Wired, www.wired.com/2015/10/hacker-who-broke-into-cia-director-john-brennan-email-tells-how-he-did-it/.

48 With Liberty to Monitor All, p. 7.

50 The report describes some of their challenges in greater detail. See ibid., pp. 22–48.

51 One major challenge is making initial contact with a source without using e-mail or a phone. First, there is a practical problem: the journalist must find the source’s precise physical location, which can be prohibitive if the source is not nearby. Second, many sources do not take kindly to being accosted by an unfamiliar journalist seeking to develop a relationship about which they may have reservations. On the other hand, using e-mail or a phone, even with security measures in place, will nearly always leave a link between the journalist and the source that can be discovered later. See ibid.

52 Air-gapped computers are computers that never connect to any insecure network (including the Internet) and are often kept in a secure room. One journalist equated them to electronic typewriters. See With Liberty to Monitor All, p. 32.

53 See D. Leinwand Leger and Y. Alcindor, “Petraeus and Broadwell used common e-mail trick,” USA Today, www.usatoday.com/story/tech/2012/11/13/petraeus-broadwell-email/1702057/.

54 With Liberty to Monitor All, p. 39.

55 Ibid., pp. 49–65.

56 See American Bar Association, “Model Rules of Professional Conduct: Rule 1.6: Confidentiality of Information,” www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_6_confidentiality_of_information.html. Some legal experts anticipate that failure to use secure technologies to store or communicate confidential information may soon become grounds for sanctions. With Liberty to Monitor All, pp. 58–59.

57 It is noteworthy, for example, that the ten reports the first UN Special Rapporteur on privacy is scheduled to produce between 2017 and 2021 do not appear designed to address this subject at all. See Office of the High Commissioner for Human Rights, “Planned Thematic Reports and call for consultations,” www.ohchr.org/EN/Issues/Privacy/SR/Pages/ThematicReports.aspx.

58 Vienna Convention on the Law of Treaties, art. 31.

59 Ibid., art. 32. To the extent that this chapter has focused on the United States, it is significant that the US accepts the Vienna Convention as informing interpretation of its treaty obligations. See E. Criddle, “The Vienna Convention on the Law of Treaties in U.S. Treaty Interpretation” (2004) 44 Virginia Journal of International Law 431, 443.

60 E. Bjorge, The Evolutionary Interpretation of Treaties (Oxford: Oxford University Press, 2014), p. 189.

61 Ibid., p. 190.

62 The fact that someone has made correspondence public may, of course, bear on the question of whether interference with that correspondence is arbitrary or unlawful.

63 ICCPR, preamble.

64 Numerous opinion polls taken since the Snowden revelations appear to show that a majority of Americans continue to value privacy in a variety of contexts, even as they are willing to permit certain intrusions for the sake of protecting national security. See, e.g., M. Madden and L. Rainie, “Americans’ Attitudes About Privacy, Security and Surveillance,” Pew Research Center, www.pewinternet.org/2015/05/20/americans-attitudes-about-privacy-security-and-surveillance/; University of Southern California Annenberg School for Communication and Journalism, “Is online privacy over?,” http://annenberg.usc.edu/news/around-usc-annenberg/online-privacy-over-findings-usc-annenberg-center-digital-future-show; L. Cassani Davis, “How Do Americans Weigh Privacy Versus National Security?,” The Atlantic, www.theatlantic.com/technology/archive/2016/02/heartland-monitor-privacy-security/459657/; L. Rainie and M. Duggan, “Privacy and Information Sharing,” Pew Research Center, www.pewinternet.org/2016/01/14/privacy-and-information-sharing/.

65 United States v. Jones, 132 S. Ct. 945, 962 (2012) (Alito J., concurring).

66 See ICCPR, art. 2(1).

67 See, e.g., U.N. Office of the High Commissioner for Human Rights, “International Human Rights Law,” www.ohchr.org/EN/ProfessionalInterest/Pages/InternationalLaw.aspx.

68 ICESCR, art. 12. The right is also suggested in the Universal Declaration of Human Rights, although somewhat less directly. See UDHR, art. 25.

69 Note that the United States has not ratified the ICESCR, and therefore the right to health does not exert the same binding force on the US as the right to privacy does.

70 UN Office of the High Commissioner for Human Rights and World Health Organization, “The Right to Health: Fact Sheet No. 31,” at 5, www.ohchr.org/Documents/Publications/Factsheet31.pdf.

71 States would also have negative obligations with respect to the right to health, such as to refrain from directly undermining the health of their populations (for example, by stripping large segments of the population of health insurance or polluting the drinking water).

72 See ibid., at 3.

74 There is something of an asymmetry between the right to privacy and the right to health, in that it may be less likely that a state would seek to limit the latter to advance an alternate interest, like national security.

75 See J. Comey, “Encryption, Public Safety, and ‘Going Dark’,” Lawfare, www.lawfareblog.com/encryption-public-safety-and-going-dark; T. Schleifer, “FBI director: We can’t yet restrain ISIS on social media,” CNN, www.cnn.com/2015/06/18/politics/fbi-social-media-attacks/.

76 D. Froomkin and J. McLaughlin, “Comey Calls on Tech Companies Offering End-to-End Encryption to Reconsider ‘Their Business Model’,” The Intercept, https://theintercept.com/2015/12/09/comey-calls-on-tech-companies-offering-end-to-end-encryption-to-reconsider-their-business-model/.

77 J. McLaughlin, “Spy Chief Complains That Edward Snowden Sped Up Spread of Encryption by 7 Years,” The Intercept, https://theintercept.com/2016/04/25/spy-chief-complains-that-edward-snowden-sped-up-spread-of-encryption-by-7-years/.

78 J. McLaughlin, “Obama Wants Nonexistent Middle Ground on Encryption, Warns Against ‘Fetishizing Our Phones’,” The Intercept, https://theintercept.com/2016/03/11/obama-wants-nonexistent-middle-ground-on-encryption-warns-against-fetishizing-our-phones/.

79 See “Exhibit B: Minimization Procedures Used by the National Security Agency in Connection with Acquisitions of Foreign Intelligence Information Pursuant to Section 702 of the Foreign Intelligence Surveillance Act of 1978, as Amended,” at 9, www.aclu.org/files/assets/minimization_procedures_used_by_nsa_in_connection_with_fisa_sect_702.pdf.

80 See, e.g., K. Opsahl and T. Timm, “In Depth Review: New NSA Documents Expose How Americans Can Be Spied on Without a Warrant,” Electronic Frontier Foundation, www.eff.org/deeplinks/2013/06/depth-review-new-nsa-documents-expose-how-americans-can-be-spied-without-warrant.

81 See S. Sorcher and J. Eaton, “What the US government really thinks about encryption,” The Christian Science Monitor, www.csmonitor.com/World/Passcode/2016/0525/What-the-US-government-really-thinks-about-encryption.

13 The Future of Human Rights Technology: A Practitioner’s View

1 C. Weeramantry, The Impact of Technology on Human Rights: Global Case-Studies (Tokyo: United Nations University Press, 1993); J. Metzl, “Information Technology and Human Rights” (1996) 18(4) Human Rights Quarterly 705–46; R. Jørgensen et al., “ICT and Human Rights” (FRAME Deliverable No. 2.3, 2015).

3 “Advancing the New Machine: A Conference on Human Rights and Technology,” UC Berkeley School of Law, www.law.berkeley.edu/research/human-rights-center/past-projects/technology-projects/advancing-the-new-machine-a-conference-on-human-rights-and-technology/.

4 “Internet Freedom Funding Opportunity: State Department’s Bureau of Democracy, Human Rights, and Labor (DRL),” Open Technology Fund, www.opentech.fund/article/internet-freedom-funding-opportunity-state-departments-bureau-democracy-human-rights-and.

5 “About the program,” Open Technology Fund, www.opentech.fund/about/program.

6 “The Access Grants Program – an emerging initiative,” Access Now, June 25, 2015, www.accessnow.org/the-access-grants-program-an-emerging-initiative/.

7 “RightsCon Summit Series,” www.rightscon.org/about-and-contact/; Y. Ulman, Report on the International Conference on “Emerging Technologies and Human Rights,” Council of Europe Bioethics Committee, DH-BIO, Strasbourg, 4–5 May 2015 (December 2015).

8 “History, Goals and Guiding Principles,” Internet Freedom Festival, https://internetfreedomfestival.org/history/.

9 “Remote Sensing for Human Rights,” Amnesty International USA, www.amnestyusa.org/research/science-for-human-rights/remote-sensing-for-human-rights.

10 E. Schmidt, “Google Ideas Becomes Jigsaw,” Jigsaw, February 16, 2016, https://medium.com/jigsaw/google-ideas-becomes-jigsaw-bcb5bd08c423.

11 J. Powles, “Google’s Jigsaw project has new ideas, but an old imperial mindset,” The Guardian, February 18, 2016, www.theguardian.com/technology/2016/feb/18/google-alphabet-jigsaw-geopolitical-games-technology.

12 B. Prainsack and A. Buyx, “Thinking Ethical and Regulatory Frameworks in Medicine from the Perspective of Solidarity on Both Sides of the Atlantic” (2016) 37(6) Theoretical Medicine and Bioethics 489–501.

13 “Overview,” Martus, https://martus.org/overview.html.

14 “About OpenEvsys,” OpenEvsys, http://openevsys.org/about-openevsys/.

15 A. Marx and S. Goward, “Remote Sensing in Human Rights and International Humanitarian Law Monitoring: Concepts and Methods” (2013) 103(1) Geographical Review 100–11.

16 “Case Against M. Al Mahdi,” International Criminal Court, http://icc-mali.situplatform.com/.

17 D. Whetham, “Drones to Protect” (2015) 19(2) The International Journal of Human Rights 199–210.

18 M. Doretti and C. Snow, “Forensic Anthropology and Human Rights,” in D. Steadman (ed.), Hard Evidence: Case Studies in Forensic Anthropology (Upper Saddle River, NJ: Prentice Hall, 2003) pp. 290–310; S. Wagner, To Know Where He Lies: DNA Technology and the Search for Srebrenica’s Missing (Oakland: University of California Press, 2008); A. Rosenblatt, Digging for the Disappeared (Redwood City, CA: Stanford University Press, 2015), p. 1.

19 K. Kelly, What Technology Wants (New York: Penguin, 2010).

20 K. Kakaes et al., Drones and Aerial Observation: New Technologies for Property Rights, Human Rights, and Global Development: A Primer (Washington, DC: New America, 2015).

21 J. Kumagai, “9 Earth-Imaging Start-Ups to Watch,” IEEE Spectrum, March 28, 2014, http://spectrum.ieee.org/aerospace/satellites/9-earthimaging-startups-to-watch.

22 E. Higgins, “A New Age of Open Source Investigation: International Examples,” in B. Akhgar et al. (eds.), Open Source Intelligence Investigation (New York: Springer International Publishing, 2016) pp. 189–96.

23 See Kelly, What Technology Wants.

24 E. Weizman, “Forensic Architecture: Violence at the Threshold of Detectability” (2015) 54(4) E-flux Journal 1–17.

25 Y. Bois et al., “On Forensic Architecture: A Conversation with Eyal Weizman” (2016) 156 October 115–40.

26 J. McCarthy et al., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955” (2006) 27(4) AI Magazine 12.

27 C. Moyer, “How Google’s AlphaGo Beat a Go World Champion,” The Atlantic, March 28, 2016.

28 The author is program manager and co-founder of the Technology Program at the Center for Human Rights Science, Carnegie Mellon University.

29 J. D. Aronson, S. Xu, and A. Hauptmann, “Video analytics for conflict monitoring and human rights documentation” (2015).

30 S. Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System,” https://bitcoin.org/bitcoin.pdf.

31 L. Literak, “Bitcoin dosáhl parity s dolarem” [“Bitcoin reached parity with the dollar”], AbcLinuxu, February 22, 2014, www.abclinuxu.cz/zpravicky/bitcoin-dosahl-parity-s-dolarem.

32 “History of bitcoin,” Wikipedia, https://en.wikipedia.org/wiki/History_of_bitcoin.

33 Video Vault, www.bravenewtech.org/.

34 A trusted time stamp is a form of proof of existence: a trusted third party creates and maintains a cryptographic hash of a file, certifying that a particular digital asset existed in exactly that form at a given time. A hash is a short alphanumeric string derived from the digital file; because any change to the file produces a different hash, the certified hash makes later creation or modification of the file detectable.
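A minimal sketch in Python of the hashing step such a service performs; the file name is hypothetical, and a real time-stamping authority would additionally sign the hash-and-time record with its own key rather than simply printing it.

```python
import datetime
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The (hash, time) pair is what a trusted time-stamping service certifies.
# "witness_video.mp4" is a hypothetical file name used for illustration.
record = {
    "sha256": sha256_of_file("witness_video.mp4"),
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}
print(record)
```

Because altering even a single byte of the file yields a completely different digest, anyone holding the certified record can later verify that a file offered as evidence is bit-for-bit identical to the one that existed at the recorded time.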

35 T. Levitt, “Blockchain technology trialled to tackle slavery in the fishing industry,” The Guardian, September 7, 2016, www.theguardian.com/sustainable-business/2016/sep/07/blockchain-fish-slavery-free-seafood-sustainable-technology.

36 Humanitarian Blockchain, Facebook, www.facebook.com/HumanitarianBlockchain.

37 G. Zyskind and O. Nathan, “Decentralizing Privacy: Using Blockchain to Protect Personal Data,” in 2015 IEEE Security and Privacy Workshops (SPW), Washington, DC, May 21–22, 2015, pp. 180–84.

38 M. Prosser, “How a Crowd Science Geiger Counter Cast Light on The Fukushima Radioactive Fallout Mystery,” Forbes, March 10, 2016, www.forbes.com/sites/prossermarc/2016/03/10/how-a-crowd-science-geiger-counter-cast-light-on-the-fukushima-radioactive-fallout-mystery/.

39 Public Lab contributors, “Public Lab: Gulf Coast,” https://publiclab.org/wiki/gulf-coast.

40 Public Lab contributors, “Public Lab: Desktop Spectrometry Kit,” https://publiclab.org/wiki/dsk.

41 Raspberry Pi Foundation, “About Us,” www.raspberrypi.org/about/.

42 “Sales Soar and Raspberry Pi British Board Beats Commodore 64,” The MagPi Magazine, March 16, 2017, www.raspberrypi.org/magpi/raspberry-pi-sales/.

43 A. Huang and S. Cross, “Novena: A Laptop With No Secrets,” IEEE Spectrum, October 27, 2015, http://spectrum.ieee.org/consumer-electronics/portable-devices/novena-a-laptop-with-no-secrets.

44 Purism, “Discover the Librem 13,” https://puri.sm/products/librem-13/.

45 “End-to-End Encryption,” WhatsApp Blog, April 5, 2016, https://blog.whatsapp.com/10000618/end-to-end-encryption.

46 Access Now, “Encryption TK: Securing the Future of Journalism and Human Rights,” YouTube, March 20, 2014, www.youtube.com/watch?v=uxidkrhO0-0.

47 A. Hilts, C. Parsons, and J. Knockel, “Every Step You Fake: A Comparative Analysis of Fitness Tracker Privacy and Security,” Open Effect (2016).

48 A. Wang, “Can Alexa help solve a murder? Police think so – but Amazon won’t give up her data,” The Washington Post, December 28, 2016, www.washingtonpost.com/news/the-switch/wp/2016/12/28/can-alexa-help-solve-a-murder-police-think-so-but-amazon-wont-give-up-her-data/.

49 “Hacked Cameras, DVRs Powered Today’s Massive Internet Outage,” Krebs on Security, October 21, 2016, https://krebsonsecurity.com/2016/10/hacked-cameras-dvrs-powered-todays-massive-internet-outage/.

50 E. MacAskill et al., “NSA Files: Decoded: What the revelations mean for you,” The Guardian, November 1, 2013, www.theguardian.com/world/interactive/2013/nov/01/snowden-nsa-files-surveillance-revelations-decoded.

51 E. Piracés, “From Paranoia to Solidarity: Human Rights Technology in the Age of Hyper-Surveillance,” Canada Centre for Global Security Studies, March 28, 2014, www.cyberdialogue.ca/2014/03/from-paranoia-to-solidarity-human-rights-technology-in-the-age-of-hyper-surveillance-by-enrique-piraces/.
