
Part II - Technology and Human Rights Enforcement

Published online by Cambridge University Press:  19 April 2018

Molly K. Land, University of Connecticut School of Law
Jay D. Aronson, Carnegie Mellon University, Pennsylvania

Publisher: Cambridge University Press
Print publication year: 2018
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/.

Building on recent scholarship and advocacy on the transformation of human rights fact-finding in the digital era,Footnote 1 Part II considers the opportunities and challenges presented by the use of new technologies to enforce human rights. In Chapter 6, “The Utility of User-Generated Content in Human Rights Investigations,” Jay Aronson addresses the integration of user-generated content in efforts to hold human rights violators accountable. Using a series of case studies, Aronson demonstrates how this evidence can be integrated into human rights investigations and why investigators must be careful when doing so. In Chapter 7, “Big Data Analytics and Human Rights: Privacy Considerations in Context,” Mark Latonero analyzes the privacy risks of using large aggregated datasets in human rights monitoring and argues for the development of better normative standards to protect privacy in the process.

While the contributions by Aronson and Latonero are primarily concerned with the collection of information for accountability efforts, the other two chapters in this part address the impact of technological advances on the display and use of information in advocacy contexts. Chapter 8, “The Challenging Power of Data Visualization for Human Rights Advocacy,” by John Emerson, Margaret L. Satterthwaite, and Anshul Vikram Pandey, considers the use of new data visualization techniques to communicate and analyze human rights problems. They discuss both the historical evolution of data visuals in advocacy and the risks and benefits of using data visualization in this context. Chapter 9, “Risk and the Pluralism of Digital Human Rights Fact-Finding and Advocacy,” by Ella McPherson, draws on field research with human rights organizations to address the way in which these organizations manage risk associated with the introduction of new technologies. She confronts the reality that not all organizations are equally equipped to manage these risks, and she suggests that unless this is addressed, it could have negative impacts on human rights advocacy in the long term.

The chapters in Part II make clear that one of the most significant challenges in regulating the human rights impacts of technology is that the very same characteristics of technology that present the greatest opportunities also create the greatest risks. For example, the increasing availability of low-cost technology to document abuses means more documentation by human rights organizations, bystanders, victims, and even perpetrators. At the same time, more documentation of violations means the generation of greater quantities of data, leading to significant challenges in collecting, sorting, and storing this information. Crucial evidence may also be lost or collected in ways that render it inadmissible in later proceedings to hold perpetrators accountable or unverifiable for use in historical clarification or transitional justice efforts. Video of human rights violations, whether created and shared by bystanders, victims, or perpetrators, can enhance the efficacy of legal tribunals and other accountability mechanisms, but such video also raises a host of legal and ethical challenges regarding ownership of content and concerns associated with making such material public.

Similarly, greater participation in documentation efforts by nonprofessionals could yield democratizing effects for human rights advocacy and bolster the legitimacy of the human rights project, which is often critiqued as elitist.Footnote 2 The reduced role for gatekeepers, however, makes verification of information and protection of victims and witnesses much more challenging.Footnote 3 As compared to human rights researchers, nonprofessionals engaging in human rights research may have a much higher tolerance for risk, which can have significant implications for victim and witness safety. The persistence of digital information can also frustrate traditional understandings of the right to privacy and undermine efforts to ensure the informed consent of witnesses who share information about human rights abuses.

Even the solutions that technology offers to address some of these challenges can create new ones. As Aronson’s chapter makes clear, the ability of human rights practitioners to gather information about victims of human rights violations from user-generated content increases the likelihood that justice and accountability institutions will hear their cases. At the same time, such information gathering can also expose the creator and those portrayed digitally to discovery or harassment by perpetrators and their allies. Similarly, as the chapter by Emerson, Satterthwaite, and Pandey illustrates, data visualization can be a powerful tool for understanding and communicating about human rights violations, but it can just as easily obscure or fundamentally misrepresent the details of a complex situation.

Moreover, any democratizing potential that technology might have can be undermined by broad disparities in its distribution, which poses the risk of reinforcing rather than challenging the status quo. Until global power dynamics around technological innovation are changed, these resources will remain unevenly distributed. Some of the most innovative tools, such as satellite imagery, statistical methodology, and sophisticated data analysis techniques, are out of reach for many grassroots organizations, both financially and in terms of expertise. Local groups do not have the resources they need to use technology effectively or safely in their work, and more powerful groups may appropriate the documentation they produce without providing any direct benefit in return. The difficulty of maintaining good digital security is in part a product of poor technological design, but it also reflects preexisting power imbalances and the absence of funders that support the development of technological capacity among small organizations. McPherson argues, in turn, that this unevenly distributed capacity of human rights organizations to manage the risk introduced by new technologies and methodologies is likely to have a negative impact on human rights pluralism – and on human rights.

The chapters in Part II thus question not only the democratizing possibilities of technology, but also its purported objectivity or neutrality. This inquiry can be applied to the design of technology itself. It can also be applied to the activities and processes deployed through or engendered by the physical artifacts of technology, as well as the expertise employed to create it.Footnote 4 In the context of big data, for example, this includes the creation of algorithms that make a variety of decisions about information contained in large datasets, including prioritizing or classifying information or creating associations within it.Footnote 5 When governments, nongovernmental agencies, or advocacy groups subject data to new forms of analysis, they can introduce algorithmic bias into social and political decision-making. Recent academic and advocacy work has shown the limits of objectivity in data analysis, with respect to both the messiness of real-world data and the fact that an algorithm is just a set of instructions written by humans (who are often prone to bias) to be followed by a computer processor.

Indeed, the destabilizing introduction of new technologies reveals pressures on the idea of “truth” that we often try to ignore. Data visualization techniques, for example, can portray ostensibly “true” material in biased or misleading ways. Much of this indeterminacy is a function of the role of interpretation and perception. Information does not exist in a vacuum, but is constantly interpreted and reinterpreted by its audience, which “might variously refuse, resist, or recode those materials for their own purposes.”Footnote 6 In an era of “fake news,” with heightened pressure on social media companies to remove “false” material from their sites,Footnote 7 understanding the impact of technological developments on concepts of truth is crucial.Footnote 8

All of these potential risks and rewards exist in an environment in which cultural factors, convenience, government aid agencies, technology companies, and human rights funders are encouraging technological solutions to human rights documentation and advocacy problems. Human rights advocates and organizations that might benefit from avoiding new technologies until they develop better mechanisms to cope with the attendant risks may nonetheless feel compelled to adopt them in order to remain relevant in the field. In his reflection that concludes this volume (Chapter 13), Enrique Piracés picks up on this theme. He worries that the lure of technological solutions risks focusing our attention on documentation as an end in and of itself, rather than as part of a larger response. Documentation is important, and in some instances documenting a violation may be the only response possible. There may even be a moral obligation to document, albeit one that must be balanced with security and other concerns. Nonetheless, the seduction of “new technology” should not lead us to overemphasize investigation at the expense of other responses, like transitional justice efforts, legislative reform, or community mobilization.

6 The Utility of User-Generated Content in Human Rights Investigations

Jay D. Aronson

Over the past decade, open source, user-generated content available on social media networks and the Internet has become an increasingly important source of data in human rights investigations. Although use of this data will not always generate significant findings, the sheer volume of user-generated content means that it is likely to contain valuable information. As Craig Silverman and Rina Tsubaki note in the introduction to the Verification Handbook for Investigative Reporting, such analysis, at least with respect to journalism, is now “inseparable from the work of cultivating sources, securing confidential information and other investigative tactics that rely on hidden or less-public information.”Footnote 1

In this chapter, I will examine how firsthand video recordings of events by citizen witnesses, journalists, activists, victims, and perpetrators are being used to document, prosecute, and find remedies for human rights violations. In this context, I do not focus on why such recordings are made, but rather on the extent to which they provide a particular audiovisual perspective on conflict or human rights violations. I take it as given that most of this material will be biased in what it depicts and what it leaves out, and that all of it requires significant analysis and appraisal before being authenticated and used for investigatory purposes. I focus primarily on video recordings, whether intentionally produced for human rights documentation or not, because of their prevalence in recent human rights investigations, their rich informational content, their verifiability, and their capacity for rapid dissemination via social media. My purpose is to highlight the potential utility, obvious limitations, and significant evidentiary, legal, and ethical dangers of relying on eyewitness videos by examining cases in which they are already being used.

I The Role of User-Generated Content in Human Rights Investigations

As the nongovernmental organization WITNESS notes in its seminal 2011 report, Cameras Everywhere,

Video has a key role to play, not just in exposing and providing evidence of human rights abuses, but across the spectrum of transparency, accountability and good governance. Video and other communication technologies present new opportunities for freedom of expression and information, but also pose significant new vulnerabilities. As more people understand the power of video, including human rights violators, the more the safety and security of those filming and of those being filmed will need to be considered at each stage of video production and distribution.Footnote 2

Ultimately, WITNESS argues, the ability to access the technology, skills, and expertise needed to analyze these videos will determine “who can participate – and survive – in this emerging ecosystem of free expression.”Footnote 3

A wide range of people produce video content and share it through social media, the Internet, semiprivate communication channels like Telegram and Snapchat, or privately via e-mail or physical storage. Armed clashes, protests, riots, and similar events are increasingly live-streamed as they happen. Some of the creators of this content have been trained in human rights documentation, while others have not. In many cases, damning video will come from the perpetrators themselves, who use the content to boast of their power and accomplishments or to seek funding from sympathetic outsiders.

Courts, tribunals, truth commissions, and other fact-finding (or perhaps fact-generating) bodies, as well as journalists and human rights advocates, need to be sensitive to the wide-ranging quality, completeness, and utility of user-generated content. They cannot assume that the content was intentionally created or that the people represented in this material know that their images and activities are being stored, processed, and analyzed for human rights purposes. Extra care must be taken to ensure the privacy, security, and other basic rights of people who produce such content or appear in it. In the case of perpetrator video, they must assume that the content has public relations goals, and they must take care not to spread messages of hate or extremism. Additionally, it is crucial to keep in mind that many war crimes and human rights abuses will continue to leave few electronic traces.Footnote 4 Like all other forms of evidence, video is not a magic bullet or panacea that will put an end to atrocities. Nor does it mitigate the need for eyewitnesses and victims to provide testimony and for investigators to visit the scenes of crimes and conduct thorough investigations.Footnote 5

Nonetheless, video has potential value at every stage of a human rights investigation, whether that investigation is designed to feed into advocacy or legal proceedings.Footnote 6 Most commonly, video generates leads that can be used to start an investigation. It can also provide evidence to establish that a crime or violation occurred, or it can be used to support a particular factual finding, such as whether a particular weapon was used in a conflict or whether a particular mining site is polluting a water source. Sometimes, it can also link a particular person, group, government, or company to the violation in question.

A great deal of citizen video provides lead evidence or evidence that a crime or violation occurred. For example, it might show the aftermath of an attack or event, such as destroyed infrastructure, suffering victims, dead bodies, inhumane working conditions, or a damaged environment. Evidence linking a perpetrator to a crime or violation is often harder to come by, but the creative investigator can sometimes mine video collections for this information.Footnote 7 Videos might show the presence of particular soldiers in a place where a crime has occurred, as demonstrated by uniforms, vehicles with identifiable symbols, or traceable license plates. Further, weapons and munitions (either spent or unexploded) might be traced to a given military unit, government, or manufacturer. Videos posted to the Internet might also include scenes of government officials or other individuals spurring their followers to commit acts of violence against another group. In the nonconflict context, videos might show violations in places of employment, harassment of minority groups, or environmental harm.

As noted above, the information obtained from video can play a central role in criminal proceedings or truth commissions, but it can also contribute to advocacy campaigns aimed at naming and shaming perpetrators or seeking restitution and justice for victims. Video evidence can help combat standard state or corporate responses to allegations of abuse by making it harder to deny the existence of violations. Video can also be used to raise awareness of and elicit sympathy from those who had not previously been aware of the scope or scale of abuses in a particular context. The recent focus on police brutality against black people in the United States is one example of this phenomenon. Although governments and rights violators can deny the content of written reports or question the methodology used to generate statistical claims, video and other images require the accused to develop a different response. Visual evidence makes it much harder for violators to engage in the tactic that Stanley Cohen calls “literal denial” and requires them to provide an alternative explanation for their actions or claim that their actions were justified.Footnote 8

In a case of police or security force abuse, for example, it will be more difficult (but not impossible) for the government to claim that the use of deadly weapons was justified when video evidence suggests that a political protester or criminal suspect was not an immediate threat. In the case of a massacre caught on film, the perpetrators will have to convincingly demonstrate that the videos in question were staged or depict another time or place. Video evidence also makes it harder to engage in the standard repertoire of denial techniques used to refute historical claims; i.e., that “it happened too long ago, memory is unreliable, the records have been lost, no one will ever know what [really] took place.”Footnote 9

Video evidence also makes it harder (but, again, not impossible) to impugn the credibility of victims or witnesses who testify against perpetrators because the videos that buttress their claims can be authenticated. Using geolocation techniques developed by the video verification community, it is possible to verify the location of a filmed event by matching features using services like Google Earth, Google Street View, and satellite imagery. Additionally, it is sometimes possible to determine the date and approximate time of day using shadows, weather, sun position, and other climatic clues in the footage.Footnote 10 Information extracted from video can also be combined with, and corroborated by, forensic evidence (such as autopsy, medical, or pathology reports), other scientific evidence (such as environmental analysis), official records, media reports, intercepted communications, satellite imagery, other open source data, or expert testimony (such as weapons/munitions analysis or political analysis).Footnote 11
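To make the chronolocation step concrete, the sketch below shows how shadow measurements in a frame can narrow down the time of day. It is a minimal illustration of the general technique, not any particular verification team’s tool: the simplified solar-declination formula and the `candidate_times` helper are assumptions introduced here for clarity.

```python
import math

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle (degrees) for a latitude,
    day of year, and local solar time, using standard formulas."""
    # Solar declination (degrees), a common approximation
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: the sun moves 15 degrees per hour from solar noon
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, decl_r, ha = map(math.radians, (lat_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(decl_r)
                + math.cos(lat) * math.cos(decl_r) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

def candidate_times(lat_deg, day_of_year, shadow_ratio, tolerance_deg=2.0):
    """Return local solar hours whose predicted shadow ratio matches the
    one measured in a video frame (shadow length / object height)."""
    # A vertical object of height h casts a shadow of h / tan(elevation),
    # so the measured ratio implies elevation = atan(1 / ratio).
    target_elev = math.degrees(math.atan(1.0 / shadow_ratio))
    matches = []
    for tenth_hour in range(60, 221):  # scan 6:00-22:00 in 0.1 h steps
        hour = tenth_hour / 10.0
        elev = solar_elevation(lat_deg, day_of_year, hour)
        if elev > 0 and abs(elev - target_elev) <= tolerance_deg:
            matches.append(hour)
    return matches

# Example: a shadow 1.5x the object's height near Aleppo (lat ~36.2 N)
# on day 200 of the year yields morning and afternoon candidate windows.
print(candidate_times(36.2, 200, shadow_ratio=1.5))
```

Because a given shadow length typically occurs twice a day, an investigator would combine this with other cues in the footage, such as shadow direction or weather, to choose between the morning and afternoon candidates.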

II The Challenge of Video Evidence in Law and Advocacy

Video evidence is, of course, not definitive and presents significant challenges for activists and advocates in legal and other contexts. Audiences might grow tired of viewing traumatic video and become desensitized to its effects. On the other hand, they might become less likely to believe accounts that are not supported by convincing audiovisual evidence. The persistent creation of falsified media by states, pranksters, and other nefarious actors could also impugn legitimate content.Footnote 12 Further, any good defense attorney, government operative, or rights violator can challenge the authenticity of the video or the interpretation of its content (e.g., that victims were indeed dead but were not killed by the claimed perpetrator, or that the victims did not heed warnings to disperse in the moments before the video was shot). They can also offer alternative explanations for what is seen (e.g., that the munitions in question were stolen from the army by a rebel force, or that anti-government forces were dressed up as army personnel). Video evidence cannot always help dispute claims made by perpetrators that their actions were justified on national or internal security grounds, that the victims posed a threat, or that the action captured on video was wrong but the matter has already been dealt with internally.

III Three Case Studies: Syria, Ukraine, and Nigeria

In order to better understand the landscape of video content, this section discusses three recent case studies in which videos were used to support human rights advocacy and accountability. These cases are not fully representative of all of the possible uses of audiovisual evidence in human rights work, but they provide geographically diverse exemplars that, when viewed together, highlight many of the important issues in this domain. They highlight the diverse ways that video content is being used in advocacy and accountability efforts; they also demonstrate the variety of conflict-related information that can be gathered from video, including the distribution of weapons systems, the relative strength of military units or protest movements, and social linkages among various actors in a conflict or political movement. These uses go beyond the kind of secondhand witnessing and visual representation of an event that have traditionally characterized video evidence and raise important questions about the representativeness and reliability of audiovisual accounts. These cases also illustrate the way in which video can become the source of other forms of evidence (e.g., forensic studies of blood stains or wounds that have not been directly examined by an expert, or ballistics reports on weapons that are seen only on video), as well as how it can complement or add credence to expert testimony about physical evidence.

A Case Study 1: Syria

The volume of video generated in the ongoing conflict in Syria may very well be the new norm. As of summer 2017, estimates suggest that more than one million conflict-related videos have been filmed there. Put another way, viewing all of this footage would take more time than the conflict itself has lasted. Most of these videos can be found on social media sites like YouTube and LiveLeak. Many others are circulated through e-mail and nonpublic networks like Telegram. Perpetrators (including armed combatants and Syrian military personnel), journalists, medical professionals, and citizens caught in the middle of fighting regularly post videos, photographs, and text updates of conflict situations on Twitter, Facebook, and other social media sites.

1 Overview of Projects

Several initiatives have turned to this open source material to better understand the situation in Syria. Perhaps the most extensive of these efforts is the Carter Center’s Syria Conflict Mapping Project. Initiated in 2013, this project utilizes publicly available information to document the formation of armed groups in the country and the relationships among these groups; conflict events, including ground troop clashes, aerial bombardments, and artillery shelling; and sightings of advanced weaponry. In doing so, the Syria Conflict Mapping Project is able to better understand “the evolution of armed group relations, the geographic areas of control of the various antagonists involved, and the regional and international dimensions of the conflict.”Footnote 13 The Carter Center is well aware that this information may be false, misleading, or incomplete, so those who run the project hold “regular discussions with conflict stakeholders [including face-to-face meetings in Turkey and within Syria itself, as well as through phone, Internet, or social media conversations] in order to ensure the accuracy of information collected and gain further insights regarding conflict development.”Footnote 14

The Carter Center effort is not the only project that has sought to map the dynamics of the Syrian conflict through user-generated content. Over the last four years, a British citizen and blogger, Eliot Higgins, has been tracking weapons depicted in social media videos in Syria and is now considered one of the foremost experts on the conflict. There are also various military, intelligence, and supranational efforts to monitor open source material from the Syria conflict, but their details are less well known.

2 Weapons Analysis

Both Higgins and the Carter Center have used social media content to identify weapons used in the Syrian conflict. In 2012, Higgins and a network of citizen journalists began monitoring hundreds of YouTube channels for content from a diverse array of participants in the conflict in order to document the types of weapons being displayed.Footnote 15 Through this tedious and time-consuming work, they were among the first to document the Syrian government’s use of Chinese-made cluster munitions and homemade barrel bombs in civilian areas, even though such weapons are considered illegal by most countries and the government denied their existence.Footnote 16 Higgins was also among the first to identify the funneling of weapons stockpiles from the former Yugoslavia to certain rebel groups in Syria, which led to the discovery of an arms supply network financed by the Saudi Arabian government and at least tacitly approved by the American government.Footnote 17

The Carter Center’s Syria Conflict Mapping Project has also monitored the flow of heavy and sophisticated weapons throughout the country by identifying this weaponry in social media videos. According to the project leader, Chris McNaboe, there are a variety of reasons why armed groups include information about their weapons capability in videos. Many groups boast of their capabilities (including posting videos of their sophisticated weapons) in order to intimidate enemies or convince funders and citizens to join their cause. Others are required by their weapons suppliers (especially governments) to provide proof of their use – e.g., we give you twenty rockets, you post twenty videos of them being used – to ensure that the weapons are not falling into the wrong hands.Footnote 18

In its September 2014 report, the Carter Center analyzed more than 2,500 videos that provided information about weapons, which allowed it to “gain insight into the amounts, networks, timeframes, impacts, and intentions surrounding these efforts.”Footnote 19 The report argues that the analysis of three particular weapons – the Croatian RAK-12 rocket launcher, the Chinese HJ-8 antitank guided missile, and the American BGM-71 TOW antitank guided missile – provides valuable clues about the arming of Syrian opposition groups by Saudi Arabia and Qatar with American assistance. The report further notes that all three weapons systems seem to have been distributed on a very limited basis to select groups, but adds that they did not stay in the hands of the intended recipients.Footnote 20

3 Network Dynamics

One of the most novel applications of user-generated content in the analysis of the Syrian conflict involves documentation of the emergence and shifting allegiances of armed groups. Beginning with the earliest defections from the Syrian Armed Forces and continuing with the formation of new armed groups to fight against Assad’s regime, a large majority of anti-Assad fighters and factions announced their intention to defect via highly stylized videos posted online to social media. Indeed, as of June 2015, the Syria Conflict Mapping Project had access to an archive of videos documenting nearly 7,000 armed group formations that had been collected and maintained by the group Syria Conflict Monitor. While some fighters and armed groups are not represented in this collection, either because videos were not located or because the groups formed without such an online announcement, the Carter Center argues that the data gathered from these formation videos present a relatively good picture of the situation on the ground “due to the fact that many of the largest and most capable armed groups operating in Syria have a strong online presence.”Footnote 21 By counting the number of people in each publicly available formation video, the Carter Center estimated that there were between 68,639 and 85,150 fighters across the country as of August 2013 (an exact count is often impossible because of the low quality of some videos).Footnote 22
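The interval in that estimate follows directly from the counting method: when footage is too blurry for an exact headcount, each video contributes a range rather than a single number, and the ranges are summed. The snippet below is a purely illustrative reconstruction of that arithmetic; the file names and counts are invented.

```python
# Illustrative only: per-video headcounts as (low, high) ranges, since
# low-quality footage often supports only an interval, not an exact count.
video_counts = {
    "formation_001.mp4": (35, 42),
    "formation_002.mp4": (110, 130),
    "formation_003.mp4": (18, 18),   # clear footage: exact count
}

total_low = sum(low for low, _ in video_counts.values())
total_high = sum(high for _, high in video_counts.values())
print(f"Estimated fighters: between {total_low} and {total_high}")
```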

The Carter Center acknowledges that it is very difficult to know what happens with fighters and units after they form, because this information is generally not available on social media. Based on an analysis of connections within the social networks of these groups, though, Carter Center researchers do claim to be able to determine which units are becoming more or less powerful within the opposition. They also make note of which units fade or disband over time based on social media activity. The Carter Center determines relationships among existing armed groups by their connections on social media. The more connections to a particular group, the more influential it is considered to be. These connections do not necessarily imply cooperation or subordination, though, and the report calls for additional research to understand the exact nature of the linkages.
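The connection-counting logic described here is, in effect, a degree-centrality measure over a social graph. The sketch below shows one plausible way to compute it with the networkx library; the group names and edges are hypothetical, and the Carter Center’s actual tooling is not documented in this chapter.

```python
# A minimal degree-centrality sketch of the connection-counting approach.
import networkx as nx

G = nx.Graph()
# Each edge is an observed social media connection (follow, share, repost);
# these particular groups and links are invented for illustration.
G.add_edges_from([
    ("Group A", "Group B"),
    ("Group A", "Group C"),
    ("Group A", "Group D"),
    ("Group B", "Group C"),
])

# More connections -> treated as more influential within the opposition
centrality = nx.degree_centrality(G)
for group, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{group}: {score:.2f}")
```

As the chapter notes, a high score in such a graph signals visibility, not necessarily command or cooperation, which is why these rankings serve as starting points for investigation rather than conclusions.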

One finding made by the Carter Center was that as of late 2013, although many groups had claimed allegiance to the Free Syrian Army (FSA, the organization formed by officers who defected from the Syrian Armed Forces at the beginning of the armed conflict) through national-level “military councils,” the FSA remained largely a franchise organization, because local-level armed groups had very few direct connections with FSA leadership. “Instead,” the report notes, those local groups “have sought to support themselves, and most have established their own patronage networks and leadership structures that have served to increase factionalism of the opposition.”Footnote 23 This report also noted that social media connections demonstrated the growth of clear divisions among armed groups in particular regions, providing further evidence of factionalism on the ground. At the beginning of 2014, however, the Carter Center reported a coalescing of rebel forces on social media, presumably because of the necessity of banding together to simultaneously fight ISIS and the Syrian government. These new networks seemed to differ from previous military council-type arrangements in that they were much larger and demonstrated “a more credible commitment to integration than previous efforts.”Footnote 24

The Carter Center reported that component groups began to dismantle their own organizations’ structures in order to better integrate into the larger command structure. “As a sign of this dissolution,” the report notes, “component groups of the Islamic Front have been coordinating their imagery and public outreach via their various social media outlets. These groups now coordinate their Twitter hashtags, use a uniform profile picture for all Facebook, Twitter, and YouTube pages, and share each other’s posts.” Such coordination, however, should not imply unification, which “will prove to be more difficult than coordinating Twitter handles” or creating a few new integrated fighting units.Footnote 25 Indeed, many of these armed groups have very different understandings of the conflict, how they should be operating, and what the future of Syria ought to look like. Social media links also seem too tenuous to serve as the basis for legal claims about responsibility for war crimes and human rights abuses, given the multiplicity of reasons why one group might follow or friend another on a social media platform. Such links, however, ought to be considered valuable starting points for investigations into chains of command and strategic alliances.

B Case Study 2: Russian Intervention in Ukraine

After gaining public attention for his work on weapons systems in Syria, Eliot Higgins, together with his colleagues in the citizen journalism community, turned his attention to Ukraine in the summer of 2014, after the downing of Malaysia Airlines Flight 17 (MH17) by what was most likely a Russian Buk missile operated by separatists in eastern Ukraine. Since this initial foray into the Ukraine crisis, Higgins and his collaborators have devoted significant resources to countering Vladimir Putin’s claim that the conflict in Ukraine is solely a civil war and that the secessionists are merely disgruntled “people who were yesterday working down in mines or driving tractors.”Footnote 26 Higgins and his collaborators use social media and satellite imagery to track the flow of weapons and soldiers from Russia over the border into eastern Ukraine, where many ethnic Russians live and where pro-Russian and secessionist feelings are strongest. This content is sourced from media produced and uploaded by Russian soldiers fighting in Ukraine and by Ukrainian and Russian civilians on both sides of the war who are “posting photographs and videos of convoys, equipment, and themselves on the Internet” on global sites like Instagram, Facebook, Twitter, and YouTube as well as regional sites like VKontakte.Footnote 27 They also use commercially available satellite imagery to show the movement of Russian troops and weapons to and over the border with Ukraine. Higgins and his colleagues created the website Bellingcat.com to disseminate their open source findings and share the methods they use. Bellingcat also serves as a sort of virtual community for citizen journalists who analyze open source intelligence.

The Bellingcat team’s analyses of publicly available material now hold so much weight in the policy world that the Atlantic Council relied heavily on them in its May 2015 report, entitled Hiding in Plain Sight: Putin’s War in Ukraine, a direct response to the Russian government’s demands for evidence to back up American and European accusations of its involvement in Ukraine. The title refers to the open source information that “provides irrefutable evidence of direct Russian military involvement in eastern Ukraine.”Footnote 28 This report notes that “the aspect of Russian involvement in Ukraine with the widest breadth of open source information is the movement of heavy military equipment across the border, with hundreds of videos and photographs uploaded by ordinary Russians and Ukrainians who have witnessed direct Russian support of the hostilities in eastern Ukraine.”Footnote 29 By geolocating reports of individually identifiable weapons (especially tanks, armored trucks, and mobile missile systems not known to be deployed by the Ukrainian Army), the Bellingcat team has been able to trace the flow of military equipment from Russia to separatist groups in Ukraine. The team does so by analyzing serial numbers; visible markings such as words, phrases, and graphics; paint colors; damage patterns; and other unique identifiers present on the equipment.
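At its core, this tracing method amounts to grouping geolocated sightings by an item’s unique identifiers and ordering them in time. The sketch below illustrates that data structure under stated assumptions: the identifier “3x2” echoes the Buk launcher discussed later in this section, but the coordinates, dates, and URLs are placeholders, not Bellingcat’s data.

```python
# Hypothetical sketch of tracing one identifiable vehicle across sightings.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sighting:
    identifier: str      # e.g., a serial number or distinctive marking
    lat: float
    lon: float
    timestamp: datetime
    source_url: str      # the social media post the sighting came from

sightings = [
    Sighting("3x2", 48.70, 44.50, datetime(2014, 6, 23), "https://example.org/a"),
    Sighting("3x2", 48.00, 39.80, datetime(2014, 7, 15), "https://example.org/b"),
]

def movement_trail(all_sightings, identifier):
    """Chronological path of a single, individually identifiable item."""
    trail = [s for s in all_sightings if s.identifier == identifier]
    return sorted(trail, key=lambda s: s.timestamp)

for s in movement_trail(sightings, "3x2"):
    print(s.timestamp.date(), (s.lat, s.lon), s.source_url)
```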

The Bellingcat team has also used social media postings by soldiers and citizens to pinpoint the locations of Russian military personnel setting up camps near the border with Ukraine. One such example described in the Atlantic Council report is the Kuziminsky camp, which was “established only forty-six kilometers from the Ukrainian border” in the days after the annexation of Crimea.Footnote 30 This camp did not exist before 2014. Kuziminsky camp, the report notes, “became the site for hundreds of military vehicles, including tanks from the 5th Tank Brigade” of the Russian Army, which is normally stationed far away in Siberia.Footnote 31 According to the Atlantic Council report, equipment staged at Kuziminsky camp and elsewhere later turned up in eastern Ukraine, and Bellingcat’s analysis of artillery craters created during the conflict (using techniques modified from on-the-ground analysis methods published in US Army manuals) suggests that at least some missiles were fired from the Russian side of the border near newly constructed military camps.Footnote 32 In a few cases, Russian soldiers actually filmed the launch of these weapons. Damage and craters within Ukraine can be tied directly back to these launch events through analysis of missile trajectories and geolocation of launch sites.Footnote 33 The purpose of this shelling, according to the report, was to provide cover for separatists during their offensives.

The report notes that personnel stationed at these camps were decisive in the defeat of the Ukrainian Army at Debaltseve in February 2015. While the Russian government openly acknowledged that hundreds or even thousands of Russian citizens crossed over the border into Ukraine to fight alongside local separatists, it vehemently denied that Russian soldiers had done so. As the Atlantic Council report notes, though, the mounting flow of military casualties back into Russia – revealed through monitoring of the border by the Organization for Security and Co-operation in Europe, reporting by various Western media outlets, and interviews with family members of dead soldiers by the nongovernmental organization Open Russia – contradicts this assertion.Footnote 34 Known Russian soldiers’ unauthorized posting of photographs on their social media accounts provides additional evidence of their participation in hostilities within Ukraine.Footnote 35

Perhaps the most damning claim put forth by Bellingcat was that a Russian-supplied Buk missile launcher was used by separatists when they accidentally shot down MH17. Using numerous images and videos posted by citizens to social media sites, Bellingcat investigators were able to trace the movement of a particular Russian Buk launcher (which they call “3x2” because of the identification number on its side) both through Russia to the Ukrainian border in June 2014 and through Ukraine to the likely launch site of the deadly attack in July 2014. The attack itself was established through social media images of a white smoke trail posted soon after the plane was downed; at least one of these images was verified using metadata from the original photograph as well as subsequent review of satellite imagery.Footnote 36

The Russian government went to great lengths to refute claims that it supplied the weapon used to down the plane and instead placed blame on the Ukrainian Army. As early as July 21, 2014, just four days after MH17 was shot down, the Russian Ministry of Defense published a series of satellite images that purported to show that the actual culprit was a Ukrainian Buk launcher, not a Russian one. Russia used these satellite images to claim that a Buk missile launcher and three support vehicles that had previously been seen parked at a Ukrainian military base near Donetsk prior to July 17 were no longer there, and that two Buk missile launchers as well as another Ukrainian military vehicle were stationed near where MH17 was shot down on July 17.

Bellingcat undertook a forensic analysis of the satellite images using a variety of methods, including metadata analysis (which can reveal evidence of manipulation); error-level analysis (which examines compression levels throughout the image – a big difference in compression in a particular area of the image would suggest a modification was made at that location); and reference analysis (which involves comparing an image with other sources of information to determine the extent to which its contents are plausible). In this case, imagery from Google Earth was compared to the images supplied by the Russian Ministry of Defense.Footnote 37 Bellingcat investigators determined, based on an examination of vegetation patterns and the growth of an oil leak from one of the trucks located at the site, that the satellite image the Russian Ministry of Defense claimed was taken on July 17, 2014, was actually taken sometime between June 1 and June 18, 2014, and, further, that it was “highly probable” that two large banks of clouds were digitally added to this image using Adobe Photoshop CS5 software to “obscur[e] details that could have been used for additional comparisons with historical imagery.”Footnote 38

Bellingcat made similar claims about the digital manipulation of the Russian satellite images purporting to show two Ukrainian Buk missile launchers and another Ukrainian military vehicle in the area where MH17 was shot down on July 17. The report claims that all three locations that show these vehicles on the satellite imagery have a different level of compression (and thus a higher error rate) than other parts of the image. This led the investigators to conclude that the image had been modified in the regions showing the three vehicles. Additional analysis of this image based on a comparison of soil structures (i.e., patterns of vegetation and other markings that result from human activity such as agriculture) with historical Google Earth images suggests that it was taken prior to July 15, 2014. The Bellingcat investigative team ultimately concluded that “the Russian Ministry of Defense presented digitally modified and falsely dated satellite images to the international public in order to implicate the Ukrainian army in the downing of MH17.”Footnote 39 It could reach this conclusion only by analyzing the satellite imagery in tandem with video and photographs found on social media, demonstrating the value that user-generated content can add to investigations of conflict and human rights violations.
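Error-level analysis, as described above, exploits the fact that a JPEG recompressed at a known quality changes least in regions that were already compressed consistently; regions that respond very differently may have been edited. The following is a minimal sketch using the Pillow imaging library, with placeholder file names; Bellingcat’s actual workflow also involved metadata and reference analysis, which this snippet does not attempt.

```python
# Simplified error-level analysis (ELA): recompress at a fixed quality
# and measure how much each pixel changes relative to the original.
from PIL import Image, ImageChops

def error_level(image_path, quality=90, resave_path="_ela_resave.jpg"):
    original = Image.open(image_path).convert("RGB")
    original.save(resave_path, "JPEG", quality=quality)
    resaved = Image.open(resave_path)
    diff = ImageChops.difference(original, resaved)
    # Per-band (min, max) differences; large localized values can flag
    # regions whose compression history differs from the rest of the image
    return diff, diff.getextrema()

diff_image, extrema = error_level("satellite_image.jpg")
print("Per-channel (min, max) error levels:", extrema)
diff_image.save("ela_visualization.png")
```

In practice an analyst would inspect the saved difference image visually, since ELA is suggestive rather than conclusive and compression artifacts have innocent causes as well.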

C Case Study 3: Nigeria and Boko Haram

Amnesty International recently released two major reports detailing the findings of its investigations into human rights abuses and war crimes committed by Boko Haram and Nigerian security forces in northeastern Nigeria. In both reports, video played a crucial role in demonstrating that the core elements of a crime had occurred. For example, video evidence corroborated Amnesty’s claims about violations committed by Boko Haram and Nigerian forces; provided support for its conclusions that those committing the violations were under the control of Boko Haram or Nigerian forces; provided evidence that neither force was protecting itself or the public from immediate harm; and provided additional context for Amnesty’s conclusions.

The first report, “Our job is to shoot, slaughter and kill”: Boko Haram’s Reign of Terror in North East Nigeria, documents Boko Haram’s utter disregard for human rights norms and international laws of war in its quest to impose its own extreme view of Islam on ever larger swaths of the country.Footnote 40 Amnesty claims that Boko Haram’s actions have led to the deaths of more than 6,800 people, destroyed the homes and livelihoods of hundreds of thousands of people, and forced well over a million people from their homes. The group directly targets all people who do not follow its interpretation of Islam (whom it calls “kuffirs,” or unbelievers), often killing them dozens at a time simply for not pledging allegiance, or destroying their homes and property. Boko Haram also makes life extraordinarily difficult for all who reside in the territories it controls. Christians and members of the Civilian Joint Task Force (JTF) are the most common victims of these attacks, but Muslims who do not subscribe to Boko Haram’s rigid interpretation of the Koran and the laws of the Caliphate are also targeted. Civilian infrastructure – including schools, religious facilities, transportation systems, and places of business – is often destroyed in its military campaigns. Further, rape and sexual violence, sexual slavery, and forced marriages are all used as tools of war. There are also allegations that Boko Haram forces children to fight on its behalf.

Many of these atrocities have been captured on video, and property destruction in Gamborou and Bama has been confirmed via satellite imagery. For example, Amnesty reports that after a failed military campaign in September 2014, Boko Haram fighters executed numerous prisoners at one of its facilities in Bama. Boko Haram later released propaganda videos of the killings, which occurred both in the prison and on a bridge in the area. Amnesty describes the first video as follows:

[A]rmed men are seen offloading approximately thirteen bound men from a truck on a bridge. The detainees are lined up in a row and, one at a time, brought to the railings. The gunmen push each detainee’s head between the railings, then they shoot the detainee in the head and tip the body into the river. The video shows eighteen men killed this way and the scene ends with more men being offloaded from a truck. The final scene shows gunmen walking through a small room with bunk beds, checking and then shooting at bodies lying on the floor. It is not possible to tell whether those on the ground were already dead.Footnote 41

In the second video, a continuation of the first, Amnesty reports:

One of the gunmen turns to the camera and explains that they are executing prisoners: “Our job is to kill, slaughter and shoot, because our promise between god and us is that we will not live with unbelievers. We are either in the grave and with unbelievers on the earth or unbelievers in the grave and us on the earth… There is either Muslim or disbeliever, it is either of the two you will be a Muslim or a non-Muslim. These ones are living under apostate government.”Footnote 42

According to Amnesty, local residents, human rights researchers, and satellite imagery confirmed that the locations seen in the videos were in Bama.Footnote 43

In the second report, Stars on Their Shoulders. Blood on Their Hands, Amnesty alleges that the Nigerian military committed war crimes (and potentially crimes against humanity) in its noninternational armed conflict against Boko Haram in the northeastern part of the country.Footnote 44 The report alleges that Nigerian military violations included extrajudicial execution of more than 1,200 people, arbitrary arrests of at least 20,000 people, hundreds or more enforced disappearances, the deaths of more than 7,000 people in military custody, and “countless” acts of torture.Footnote 45 The Amnesty report relied heavily on classic human rights investigation methods, including more than 400 interviews with “victims, their relatives, eyewitnesses, human rights activists, doctors, journalists, lawyers, and military sources,” along with analysis of more than 800 leaked official government documents, including military reports and correspondence between high-level Nigerian military officials and local units stationed in the northeastern part of the country.Footnote 46 Amnesty also collected and analyzed “more than ninety videos and numerous photographs showing members of the security forces and their allied militia, the Civilian JTF, committing violations,” plus satellite imagery of attack sites and mass graves. Some of these videos were gathered from social media outlets like YouTube and others were acquired by Amnesty International through its network of local researchers and NGO partners.Footnote 47

Using this diverse array of material, Amnesty was able to corroborate victim and eyewitness statements with official government documents and video and photographic evidence, as well as provide context for this visual and textual evidence. Further, Amnesty also “shared its findings with the Nigerian authorities during dozens of meeting[s] as well as 55 written submissions, requesting information and specific action to address the violations.”Footnote 48 Ultimately, Amnesty also made the report and accompanying evidence available to the Office of the Prosecutor at the International Criminal Court, since Nigeria is a party to the Rome Statute and the ICC has jurisdiction in this case if national courts are unwilling or unable to investigate the allegations and prosecute if appropriate.

As in the MH17 investigation, video and photographic evidence was not used alone in this case. Rather, it was combined with other forms of evidence. Visual evidence provided strong corroboration of evidence drawn from witness interviews that the “young men and boys” subjected to extrajudicial killings in twenty-seven incidents in 2013 and 2014 posed no immediate danger to military personnel or the general public, thus undermining claims that the killings were justified. The victims “were not taking part in hostilities, were not carrying arms, and were not wearing uniforms, insignia or other indications that they were members of Boko Haram.”Footnote 49 In fourteen of these cases, the report notes, “military forces, sometimes in collaboration with Civilian JTF members, executed a large number of victims, at times dozens or even hundreds in one day.”Footnote 50

Video evidence shows that the victims were under the firm control of the military and Civilian JTF members. Victims were shot or had their throats slit in relatively orderly operations without any judicial hearing, filing of formal charges, access to an attorney, or even cursory investigation. Further, evaluation of eyewitness testimony and visual evidence strongly suggests that many additional deaths in detention were caused by “starvation, thirst, severe overcrowding that led to spread of diseases, torture and lack of medical attention, and the use of fumigation chemicals [to kill vermin] in unventilated cells.”Footnote 51

Video evidence also played a key role in corroborating allegations of torture and other forms of ill treatment in the context of mass arrests. The report notes that videos show

soldiers and members of the Civilian JTF beating suspects, making them lie down and walking on their backs, threatening them, humiliating them, tying their arms and making them roll in the mud, and in one case attempting to drown a suspect in a river. Several videos show soldiers loading detainees onto a military truck as if they are sand bags.Footnote 52

A key aspect of Amnesty’s argument is that the actions taken by the military and Civilian JTFs appear to be similar at multiple locations across the northeastern part of the country, suggesting that there was a widespread and systematic effort to target young men and boys regardless of whether they were actually Boko Haram. Many of the victims may simply have seemed sympathetic to the cause, or may have been in the wrong place at the wrong time. The extent to which the policy was formalized cannot, of course, be deduced from video evidence, but the videos do make clear that the acts were not isolated, uncoordinated, or haphazard, but rather followed a similar pattern.Footnote 53

One of the most disturbing instances occurred after Boko Haram fighters let hundreds of detainees out of the prison at the Giwa military barracks on the morning of March 14, 2014. Although the Nigerian military put up little resistance to Boko Haram’s initial attack (and, in fact, seemed to have known about the attack in advance and cleared out the night before), they were ruthless in their efforts to seek out and recapture escapees later in the day. The military and civilian JTF conducted house-to-house searches and rounded up more than 600 escapees who had not already fled with Boko Haram fighters. Most of these individuals were extrajudicially executed during the course of the day. These events were recorded in twelve videos, some of which were shot by the perpetrators themselves, obtained by Amnesty International. The videos are described in the report as follows:

One of the videos, apparently taken by a member of the Civilian JTF with the military commander’s camera, shows 16 young men and boys sitting on the ground in a line. One by one, they are called forward and told by the military commander to lie down in front of a pit. Two or three men assist in holding the detainees. Armed soldiers and Civilian JTF members then cut the men’s throats with a blade and dump them into an open mass grave. None of the men are tied and they seem to accept their fate without resistance, but the presence of armed soldiers may have prevented them from trying to escape. They may also have been in shock. The video shows five of the men being killed in this way. The fate of the remaining detainees is not shown on the video, but eyewitness accounts confirmed that nine of them had their throats cut while the others were shot dead.

A second video featuring some of the same perpetrators, taken earlier that day at the same location, shows two other detainees digging a grave, guarded by armed men in uniforms. The soldier who, according to witness testimony, is the commander of the group then tells one of the detainees to lie down in front of the pit. Men in civilian clothes who appear to be Civilian JTF members hold his legs and head, while the commander puts his right foot on the man’s side, raises his knife, kisses it, shouts “Die hard Commando” and cuts the throat of the restrained young man. All the other soldiers and Civilian JTF members shout “Yes oga [boss], kill him.”Footnote 54

These descriptions were corroborated through numerous interviews, by matching uniforms with the battalion said to be involved in the operation (one soldier had the phrase “Borno State Operation Flush” on his uniform), and by the presence of an ID number on a weapon in one of the videos clearly linking it to the battalion in question.Footnote 55

There are, of course, significant limitations to video evidence, like all other forms of evidence in human rights investigations. Lack of records, cover-up efforts by the military, and the challenges of obtaining eyewitness testimony all make it difficult to verify exactly how many extrajudicial executions take place even when events are caught on video. While Amnesty was able to successfully document more than 1,200 killings, the organization notes that it received numerous additional reports that lacked sufficient corroborating detail to be included in its report.Footnote 56

IV Discussion

The three case studies presented above do not represent the entire spectrum of uses of video in human rights advocacy and accountability efforts. They do not illustrate the use of video to protect and promote economic, social, and cultural rights or in American or Western European contexts. The three case studies do, however, demonstrate at least some of the ways in which video can be used in human rights and conflict documentation.

Especially in the cases of Ukraine and Nigeria, video is treated as one of many forms of evidence to be corroborated and integrated into a coherent package of evidence. In these two cases, video does not stand on its own in an investigation. The Carter Center takes a somewhat bolder approach, making claims about conflict dynamics – in terms of both the size and influence of particular armed groups and the weapons they possess – based on information extracted from social media networks and social media content.

While it is impossible to definitively test the Carter Center’s claim that the online presence of armed groups roughly mirrors their actual presence (with the notable exception of the Islamic State), the Center’s analysts have strong personal networks in Syria and in neighboring Turkey and are able to take advantage of human intelligence to point out errors and omissions in their open source intelligence. Even with this confirmation, the Carter Center does not claim that its findings are completely accurate or statistically valid, only that they provide insight that can be used in conflict analysis and in decision-making concerning diplomacy and humanitarian aid. The Carter Center recognizes that definitive quantitative claims cannot be made using social media data without significant additional corroboration and statistical analyses of uncertainty. Like most other forms of evidence, social media data is a convenience sample – i.e., data that is relatively easily available to the investigator, as opposed to data collected through some form of systematically randomized or cluster sampling – and the Center acknowledges the limitations this creates.

The Ukraine case shows that governments are increasingly paying attention to the results of open source investigations, and sometimes go to great lengths (including falsifying data) to call their legitimacy into question. At the same time, it also demonstrates that in an age of open source intelligence, even ordinary people can contribute to the monitoring of conflict and human rights violations. Data about conflict and human rights abuse that was once only available to military or intelligence officials can now be found on the same platforms where cat videos, sports highlights, and other cultural ephemera are routinely shared.

The widespread availability of this content has numerous positive aspects, but it also creates significant challenges. At the most basic level, repeated viewings of audiovisual depictions of death, abuse, and torture can have a variety of negative impacts at the individual and societal levels. Excessive viewing of such content can traumatize the human rights investigator, journalist, or even the casual viewer. I experienced this vicarious or “secondary” trauma firsthand after seeing a particular image of a dismembered child in a video that depicted the aftermath of a shelling in Syria, and I have spoken to numerous colleagues who have experienced it as well. It is increasingly becoming a topic of analysis in both journalism and human rights work.Footnote 57 Indeed, much of the impetus for the research my human rights colleagues and I are conducting on semi-automating video analysis using machine learning and computer vision comes from a desire to limit exposure to images of human suffering and brutality.

A corollary to secondary trauma is that the proliferation of horrific content on the Internet may ultimately desensitize the public to what is being portrayed. At a certain point, many people will learn to tune out the reality of what they see because it is too graphic, while a minority of the public will find some perverse pleasure in it in an exploitative and voyeuristic, but not empathetic, way. There are many websites and YouTube channels that present the violence in Syria and other countries accompanied by intense music rather than historical, cultural, or situational context.Footnote 58 More work will have to be done to determine the effects of the availability of this content on public opinion, but there is little indication that it led to an upsurge in demand for humanitarian intervention in the conflict.

Even more concerning than the potential negative impacts of video in the human rights domain is the reality that the proliferation of audiovisual evidence of war crimes and human rights abuses has not yet resulted in greater accountability, at either the domestic or international level. Few high-level government or military officials accused of war crimes or crimes against humanity have been forced to admit culpability, provide reparations, or step down from positions of power due to the existence of video evidence. Nor have corporate decision-makers been held to account, on the strength of damning video evidence, for the impacts of their companies’ operations on communities, environments, or workers.

But perhaps this is too much to ask of video or any other form of evidence. Achieving justice and accountability requires political will. There is already ample evidence available to convict high-level perpetrators of war crimes and human rights abuses in many violent situations around the world, but no action is taken due to geopolitical or economic realities.

That said, advocacy efforts, including naming-and-shaming campaigns, often lead to important policy change in the short term, and the preservation of relevant content allows for historical clarification and the possibility of justice and accountability in the long term.Footnote 59 One need only look at Guatemala’s efforts to bring Efrain Rios Montt to trial for crimes committed during his presidency in 1982–83, Argentina’s recent successful conviction of the architects of its Dirty War (including the dictator at the time and his closest associates), or the Extraordinary African Chamber’s recent conviction of Chad’s ex-dictator Hissène Habré for crimes against humanity that took place in the 1980s to see that justice can emerge many years, or even decades, after crimes occur.Footnote 60 Moving forward, video – properly understood, analyzed, and contextualized – will undoubtedly play an important role in these processes.

7 Big Data Analytics and Human Rights: Privacy Considerations in Context

Mark Latonero Footnote *

The technology industry has made lucrative use of big data to assess markets, predict consumer behavior, identify trends, and train machine-learning algorithms. This success has led many to ask whether the same techniques should be applied in other social contexts. It is unquestionable that new information and communication technologies bring both benefits and costs to any given domain. And so, when it comes to the human rights context, with higher stakes due to vulnerable populations, the potential risks in applying the technologies associated with big data analytics deserve greater consideration.

This chapter argues that the use of big data analytics in human rights work creates inherent risks and tensions around privacy. The techniques that comprise big data collection and analysis can be applied without the knowledge, consent, or understanding of data subjects. Thus, the use of big data analytics to advance or protect human rights risks violating privacy rights and norms and may lead to individual harms. Indeed, data analytics in the human rights monitoring context has the potential to produce the same ethical dilemmas and anxieties as inappropriate state or corporate surveillance. Therefore, its use may be difficult to justify without sufficient safeguards. The chapter concludes with a call to develop guidelines for the use of big data analytics in human rights that can help preserve the integrity of human rights monitoring and advocacy.

“Big data” and “big data analytics” are catchphrases for a wide range of interrelated sociotechnical techniques, tools, and practices. Big data involves the collection of large amounts of data from an array of digital sources and sensors. Collection often occurs unbeknownst to those who are data subjects. In big data, the subjects are the individuals creating content or emitting data as part of their everyday lives (e.g., posting pictures on social media, navigating websites, or using a smartphone with GPS tracking operating in the background). This data can be collected, processed, analyzed, and visualized in order to glean social insights and patterns. Behavioral indicators at either the aggregate or individual level can be used for observation, decision-making, and direct action.

Privacy is a fundamental human right.Footnote 1 As those in the human rights field increasingly address the potential impact of new information and communication technologies,Footnote 2 privacy is of particular concern. Indeed, as G. Alex Sinha states in Chapter 12, since the Edward Snowden revelations in 2013, “perhaps no human rights issue has received as much sustained attention as the right to privacy.” Digital technologies have called into question traditional expectations of privacy, including freedom from interference with one’s private life, control over one’s personal information, the ability to be left alone, and the right not to be watched without permission. As the technology and ethics scholar Helen Nissenbaum states, “information technology is considered a major threat to privacy because it enables pervasive surveillance, massive databases, and lightning-speed distribution of information across the globe.”Footnote 3

The emergence of details about the use of pervasive surveillance technology in the post-Snowden era has only heightened anxieties about the loss of privacy. According to a 2014 report from a UN Human Rights Council (UNHRC) meeting on the right to privacy in the digital age, the deputy high commissioner noted that

digital platforms were vulnerable to surveillance, interception and data collection… [S]urveillance practices could have a very real impact on peoples’ human rights, including their rights to privacy, to freedom of expression and opinion, to freedom of assembly, to family life and to health. In particular, information collected through digital surveillance had been used to target dissidents and there were credible reports suggesting that digital technologies had been used to gather information that led to torture and other forms of ill-treatment.Footnote 4

The UNHRC report gives examples of the risks of surveillance technologies, which expose sensitive information that can produce harms to political freedoms and physical security. The role of big data analytics in perpetuating anxieties over surveillance will be discussed later in this chapter, after highlighting the importance of understanding the human rights contexts in which big data analytics might transgress privacy norms. The chapter will first take a closer look at what is meant by privacy in relation to the technologies that comprise big data analytics in the human rights context.

I Big Data Analytics for Monitoring Human Rights: Collection and Use

Assessing the legitimate application of big data analytics in human rights work depends on understanding what the right to privacy protects. Nissenbaum’s framework of “contextual integrity” can provide a way to understand the value of privacy within the human rights context and the situations in which the use of big data might infringe this right. Contextual integrity ties “adequate protection for privacy to norms of specific contexts, demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it.”Footnote 5 Nissenbaum does not go down the perilous path of trying to define privacy in absolute terms or finding a precise legal definition against the thicket of competing legal regimes, and neither will this chapter. Instead, she discusses privacy in terms of an individual’s right to determine the flow of information about him- or herself.Footnote 6 What individuals care about, Nissenbaum argues, is that their personal information flows appropriately depending on social context. This chapter will employ similar terminology, such that privacy is violated when an individual’s information is collected, analyzed, stored, or shared in a way that he or she judges to be inappropriate.

One challenge with examining whether a specific technology may violate privacy is that technology is not a single artifact that exists by itself. Technology is a combination of sociotechnical processes. Thus, it is useful to divide the technologies that comprise big data broadly into two categories: collection and use. As Alvaro Bedoya notes, most questions dealing with data privacy and vulnerable populations focus on how data will be used rather than how it will be collected.Footnote 7 Yet collection and use are intrinsically tied together. Disaggregating these categories into their individual processes provides us with a better understanding of potential privacy concerns related to human rights monitoring.

A Collection

Discovery, search, and crawling are activities that involve finding data sources that may contain information relevant to purpose, domain, or population. Data sources can be publicly available; for example, Twitter tweets, articles on online news sites, or images shared freely on social media.Footnote 8 Other sources, such as Facebook posts, are quasi-public in that they contain data that may be intended to be accessible only to specific members of an online community with appropriate login credentials and permissions. Other data sources, such as the e-mail messages of private accounts, are not publicly searchable. Even the collection of data that is publicly available can violate the privacy expectations of Internet users whose data is being collected. These users have their own expectations of privacy even when posting on sites that are easily accessible to the public. Users may feel that their posts are private, intended only for their friends and other users of an online community.Footnote 9

Scraping involves the actual collection of data from online sources to be copied and stored for future retrieval. The practice of scraping can have an impact on individual Internet users as well. With scraping, users’ data may be collected by an entity unbeknownst to them, breaching their privacy expectations or community/social norms.
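To make the mechanics concrete, the following is a minimal Python sketch of discovery and scraping using the widely used requests and BeautifulSoup libraries. The URL and page structure are invented for illustration, and a real scraper would also have to contend with robots.txt, rate limits, authentication, and platform terms of service.

```python
# A minimal scraping sketch (hypothetical URL and page structure).
# Third-party libraries assumed: requests, beautifulsoup4.
import requests
from bs4 import BeautifulSoup

URL = "https://example.org/public-forum"  # hypothetical public source

response = requests.get(URL, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Copy every post on the page into local storage for later retrieval --
# the step at which users' content leaves the context they posted it in.
posts = [p.get_text(strip=True) for p in soup.find_all("p", class_="post")]

for post in posts:
    print(post)
```

Even this ten-line sketch illustrates the privacy point made above: nothing in the code asks the authors of those posts whether they consent to collection.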

Classification and indexing involve categorizing the collected data in a structured way so it can be searched and referenced. The data can be classified according to social categories created by the data collector or holder, such as name, gender, religion, or political affiliation. Classification extends the privacy risk to individual Internet users whose data has been collected. The subjects’ data, put in a database, is now organized in a way that may not correctly represent those subjects or may expose them if the data were inadvertently released. Placing subjects’ personally identifiable data into categories that may be incorrect may cast those in the dataset in a false light.

The storage and retention of large quantities of data are becoming more prevalent as storage becomes less expensive. This situation means that more data can be kept for longer periods of time by more entities. A post someone may have thought was fleeting or deleted can persist in numerous unseen databases effectively in perpetuity. Storing data for long periods of time exposes users to unforeseen privacy risks. Weak information security can lead to leaks or breaches that reveal personal data to others whom either the collectors or users did not intend to inform. This could expose individuals to embarrassment, extortion, physical violence, or other harms.
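The Python sketch below, using only the standard library, illustrates the classification, indexing, and retention steps just described. The categories and the keyword rule are invented, and they exemplify precisely the collector-imposed labels that can misrepresent a data subject.

```python
# A toy sketch of classification, indexing, and retention using the
# standard-library sqlite3 module; categories and keyword rule invented.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("collected_posts.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS posts (
           id INTEGER PRIMARY KEY,
           text TEXT,
           category TEXT,          -- label assigned by the collector
           collected_at TEXT       -- retention begins at this moment
       )"""
)

def classify(text: str) -> str:
    """Crude keyword rule -- exactly the kind of collector-imposed
    category that can cast a data subject in a false light."""
    if "protest" in text.lower():
        return "political"
    return "other"

for text in ["Join the protest downtown", "Lovely weather today"]:
    conn.execute(
        "INSERT INTO posts (text, category, collected_at) VALUES (?, ?, ?)",
        (text, classify(text), datetime.now(timezone.utc).isoformat()),
    )
conn.commit()

# Once stored, a post the author later deletes persists here indefinitely.
for row in conn.execute("SELECT text, category FROM posts"):
    print(row)
```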

B Use

Big data analytics involves deploying a number of techniques and tools designed to find patterns, behavioral indicators, or identities of individuals, groups, or populations. Structuring data, performing statistical modeling, and creating visualizations transform otherwise incomprehensible datasets into actionable information.

The threat to privacy from the use of big data analytics is clear. The entity performing the analysis could learn more about a person’s life than would be anticipated by a typical citizen, thereby violating the right to determine the flow and use of one’s personal information. By combining disparate data sources, these entities may be able to link online identities to real-world identities or find out about a person’s habits or personal information.Footnote 10 There is also a risk that the analysis is wrong. Datasets, and the analyses carried out on them, always carry some form of bias, and such analyses can lead to false positives and negatives that decision-makers may later act on. Deploying resources to the wrong place or at the wrong time can cause significant harm to individuals. Even if the analysis is correct in identifying a human rights violation, the victims may be put at greater risk if publicly identified due to community stigma or retaliation by perpetrators.
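A toy example makes the linkage mechanics concrete. The Python sketch below joins two invented datasets on a shared phone number; every handle, name, and number is fabricated for illustration, and real record linkage uses fuzzier matching across many attributes.

```python
# A toy illustration of linking records across two unrelated datasets
# by a shared quasi-identifier (here, a phone number); all data invented.
social_profiles = [
    {"handle": "@rights_watcher", "phone": "555-0101"},
    {"handle": "@daily_cook", "phone": "555-0199"},
]
leaked_customer_list = [
    {"name": "A. Example", "phone": "555-0101", "address": "12 Hypothetical St"},
]

# Joining the two sources ties an online persona to a real-world identity.
by_phone = {rec["phone"]: rec for rec in leaked_customer_list}
for profile in social_profiles:
    match = by_phone.get(profile["phone"])
    if match:
        print(f'{profile["handle"]} is likely {match["name"]}, {match["address"]}')
```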

Access and sharing apply to both data use and collection. The way the data is indexed or classified or the type of data collected can already reveal information considered private. For some human rights issues, personally identifiable information is needed to monitor, assess, and intervene in real time. Unauthorized sharing and access present a major breach of privacy norms in the human rights context.

II Privacy Trade-offs, Urgency, and Temporality

Privacy considerations associated with applying big data analytics in the human rights context have received considerably less attention in the literature than issues like representativeness and validity. A recent volume edited by Philip Alston and Sarah Knuckey discusses new advances in technology and big data that are poised to transform the longstanding activity of human rights fact-finding.Footnote 11 In his contribution to that volume, Patrick Ball raises major reservations about using big data for human rights. Ball argues that sampling procedures and the data itself in many big datasets are riddled with biases.Footnote 12 Regardless of its “big-ness,” data that is not representative leaves out key populations. Incomplete data can hide the very populations most affected by human rights violations. For example, when estimating the number of victims in a conflict based on available data, those who were secretly killed by paramilitary groups and never reported in the media or government records are left out. The models and analyses resulting from such flawed data, Ball argues, can lead to faulty conclusions that could imperil or discredit human rights fact-finding and evidence gathering.

Patrick Meier, on the other hand, suggests an alternative perspective, buoyed by the high-profile applications of big data analytics in humanitarian crises.Footnote 13 From crowdsourced maps after the Haiti earthquake in 2010 to millions of tweets related to Hurricane Sandy in New York, such data is important and can help save lives. Meier concedes that big data can be biased and incomplete. Yet data and information on vulnerable populations are almost always lacking in completeness, even more so in the immediate aftermath of a crisis. Thus, big data, for all its flaws, can serve to inform decision-making in real time (i.e., during a crisis event) where comprehensive information does not exist.

One of the core questions that needs to be answered when using big data for human rights purposes is the extent to which the urgency of the need being addressed impacts the decision to use imperfect data and risk privacy violations. Consider, on the one hand, a fact-finding mission to ascertain whether a human rights violation took place in the past. Sometimes the data collection can take months or years in order to produce evidence for use in justice and accountability proceedings or for historical clarification. On the other hand, in a humanitarian response to a disaster or crisis, data collection seeks to intervene in the present, to find those in immediate need. Of course, temporality is not an absolute rule separating human rights and humanitarian domains. Data can be collected both in protracted humanitarian situations and when human rights violations are happening in the present. The issue at hand is whether an urgent situation places unique demands on the flow of personal information, which impacts one’s dignity, relationships with others, and right to privacy.

During humanitarian responses to time-bound crises like natural disasters, decisions must often be made between maintaining privacy and responding quickly by using available data that is often deeply personal. For example, in the response to the Ebola crisis, humanitarian organizations deliberated on how big data could be used legitimately.Footnote 14 Consider a responder requesting the mobile phone contact list of a person who tested positive for Ebola in an attempt to stop the spread of the disease. Further, consider a response organization asking a mobile phone company for the phone numbers and records of all the users in the country in order to trace the network of individuals who may have become infected. That data would need to be analyzed to locate those at risk, and personal information might be shared with other responders without the consent of the data subjects.

The assumption in such an operational decision is that saving the lives of the person’s contacts and protecting the health of the broader public outweigh the privacy concerns over that person’s personal information. Yet such decisions about trade-offs, particularly in the case of the Ebola response, remain highly controversial due to the potential privacy violations inherent in collecting individuals’ mobile phone records at a national level without their consent.Footnote 15 Even in an urgent humanitarian response context, there is little agreement about when it is appropriate and legitimate to limit privacy rights. Applying real-time data collection and analytics to the investigation of historical human rights violations could raise even more concerns.

When, if ever, is it appropriate to limit privacy rights in order to achieve human rights objectives? Big data can support human rights fact-finding by providing a basis for estimating the probability that an individual falls within a group that has been subjected to past rights violations. Data analytics can also personally identify an individual whose rights are being violated in the present. For example, access to millions of mobile phone records in a large city may reveal patterns of calls by individuals that suggest human trafficking for commercial sexual exploitation. An analysis of these “call detail records” can indicate the movement of people, at distinctive times, to buildings where exploitation is known to take place, and can pinpoint calls with known exploiters. Yet to establish whether a specific individual’s rights have been violated, personal information like cell phone numbers, names, and building ownership would need to be identified. Such data can reveal sensitive information (e.g., religion, political affiliation, or sexual orientation) that could lead to physical harm and retribution if shared with the wrong party.
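To illustrate the kind of pattern matching described, here is a deliberately simplified Python sketch that flags numbers placing repeated calls to a known suspect number. The records, the numbers, and the threshold are all invented; real call-detail-record analysis involves tower locations, time-of-day patterns, and far more careful statistical treatment.

```python
# A deliberately simplified sketch of flagging callers who repeatedly
# contact known suspect numbers; every value here is invented.
from collections import Counter

KNOWN_SUSPECT_NUMBERS = {"555-0000"}
CALL_THRESHOLD = 3  # arbitrary illustrative cutoff

# (caller, callee) pairs drawn from hypothetical call detail records
cdr = [
    ("555-1234", "555-0000"),
    ("555-1234", "555-0000"),
    ("555-1234", "555-0000"),
    ("555-9876", "555-4321"),
]

calls_to_suspects = Counter(
    caller for caller, callee in cdr if callee in KNOWN_SUSPECT_NUMBERS
)
flagged = [num for num, n in calls_to_suspects.items() if n >= CALL_THRESHOLD]
print(flagged)  # ['555-1234'] -- a lead, not proof, and itself sensitive data
```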

Ultimately, permissible limitations on the right to privacy will vary depending on the urgency and importance of the objective to be achieved. The accompanying graph gives a visual sketch of the potential trade-offs involved in privacy. The right to privacy is not an absolute right, and it may be limited in order to achieve a legitimate objective. The extent to which the right is infringed, however, must be proportional to the objective. Thus, the graph illustrates a scenario involving a decision about the proportional relationship between the urgency of the objective of human security and permissible limits on privacy. In a human rights context, protecting the right to privacy is of high concern since safeguarding an individual’s data also protects the right to freedom of expression, association, and related rights. Yet, when there is an increasing threat to human security and safety, such as the imminent danger of an individual’s death due to violent conflict, the concern over privacy may start to decrease. Perhaps a decision must be made about whether to obtain or release the GPS coordinates of an individual’s mobile phone without his or her consent in order to locate and rescue that person and save his or her life. There may come an inflection point in the situation where the immediate danger to human security outweighs the protection of an individual’s privacy. When there is no longer an immediate threat to human security, there is every reason to uphold the integrity of the individual’s privacy right at a high level. Of course, there are a number of additional factors that might influence this analysis and decision. In some cases the release of an individual’s data will expose that person to the very forces threatening his or her safety and security or may put another individual directly at risk. Clearly, more foundational work is needed to begin to understand how these trade-offs may be operationalized in practice when making a real-time decision.
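As a purely illustrative aid, the short Python sketch below (using the third-party numpy and matplotlib libraries) draws the kind of trade-off curves just described. The functional forms, the inflection point, and all values are stipulated for illustration; they are not taken from the chapter’s figure or from any data.

```python
# A toy rendering of the trade-off described above: as the urgency of a
# threat to human security rises, the weight given to privacy may fall,
# crossing an inflection point. All curves and values are stipulated.
import numpy as np
import matplotlib.pyplot as plt

urgency = np.linspace(0, 1, 100)  # 0 = no threat, 1 = imminent danger
privacy_weight = 1 / (1 + np.exp(10 * (urgency - 0.6)))    # falls with urgency
security_weight = 1 / (1 + np.exp(-10 * (urgency - 0.6)))  # rises with urgency

plt.plot(urgency, privacy_weight, label="weight on privacy")
plt.plot(urgency, security_weight, label="weight on human security")
plt.axvline(0.6, linestyle="--", label="hypothetical inflection point")
plt.xlabel("urgency of threat to human security")
plt.ylabel("relative weight in the decision")
plt.legend()
plt.show()
```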

Does this mean that we cannot use big data collection for long-term and continuous human rights monitoring? Long-term monitoring of digital data sources for human rights violations holds the promise of providing unprecedented insight into hidden or obscured rights abuses such as labor violations, sexual abuse/exploitation, or human trafficking. The use of big data analytics for monitoring human rights, even by human rights organizations, carries both known and unknown risks for privacy. Indeed, the very nature of data collection and analysis can conflict with normative expectations of privacy in the human rights context. It is unclear whether the uncertain benefits of long-term human rights monitoring and advocacy can outweigh the very concrete risks to privacy that accompany the use of big data.

III Human Rights Monitoring and Surveillant Anxieties

The human rights field has a long tradition of employing methods and techniques for protecting privacy while monitoring rights abuses and violations. Determining “who did what to whom” involves identifying victims and alleged violators, doing an accounting of the facts, and collecting relevant information.Footnote 16 Ensuring the privacy and confidentiality of sources and victims is a critical concern in human rights fact-finding. A training manual published by the UN Office of the High Commissioner for Human Rights (OHCHR), for example, urges the use of monitoring practices that “keep in mind the safety of the people who provide information,” seeking consultation in difficult cases and maintaining confidentiality and security.Footnote 17 At the local level, Ontario’s Human Rights Commission states that data collection should include informing the public, consulting with the communities that will be affected, using the least intrusive means possible, assuring anonymity where appropriate, and protecting privacy.Footnote 18

In reality, though, ethical data collection protocols that protect privacy in the human rights context, such as obtaining informed consent, are extraordinarily difficult to utilize when deploying big data analytics. Big data analytics often relies on “found” data, which is collected without a user’s consent or even knowledge. It also necessarily involves using this information in ways not intended by the individual to whom it relates. Thus, with respect to big data, privacy harms are likely unavoidable.

The use of big data also harms privacy by adding to a growing, although ambiguous, sense of “surveillant anxiety,” in which we fear surveillance to the point that it affects our thoughts, behaviors, and sense of self. Kate Crawford describes this anxiety as “the fear that all the data we are shedding every day is too revealing of our intimate selves but may also misrepresent us… [N]o matter how much data they have, it is always incomplete, and the sheer volume can overwhelm the critical signals in a fog of possible correlations.”Footnote 19 A fact-finding mission that creates privacy risks is not equivalent to surveillance. And certainly not all surveillance technologies violate privacy in a legal sense. Yet any use of big data analytics to monitor human rights creates concerns and anxieties about surveillance, whether or not the surveillance is intended for “good.”

Thus, the issue that must be addressed is whether human rights researchers using big data analytics would themselves produce this kind of surveillant anxiety in their data subjects in ways that feel similar to traditional government or corporate surveillance. According to the Special Rapporteur on the Right to Privacy, surveillance creates privacy risks and harms such that “increasingly, personal data ends up in the same ‘bucket’ of data which can be used and re-used for all kinds of known and unknown purposes.”Footnote 20 Although surveillance for illegitimate purposes necessarily violates privacy, even surveillance for legitimate purposes will do so if the associated privacy harms are not proportional to that purpose. For example, a public health surveillance program that continuously monitors and identifies disease in a population might constitute an appropriate use for a common good shared by many, but could still violate privacy if it produces undue harms and risks. As Jeremy Youde’s study of biosurveillance contends:

[T]he individual human right to privacy had the potential to be eroded through the increased use of biosurveillance technology by governments and international organizations, such as WHO. This technology requires an almost inevitable intrusion into the behaviours, habits, and interests of individuals – collecting data on individual entries into search engines, Facebook entries, individual travel history and purchases.Footnote 21

In their work on disease surveillance in the United States, Amy L. Fairchild, Ronald Bayer, and James Colgrove document the conflict around public health surveillance during the early days of the AIDS crisis and the struggle to balance the need to collect and share medical records against charges of institutional discrimination against marginalized groups.Footnote 22

Big data collection and analysis by technology companies, sometimes called corporate surveillance, can produce the same kinds of anxieties. And big data collection by either governments or human rights organizations often relies on technologies that serve as intermediaries to the digital life of the public. Both a government agency and a human rights organization may collect data on the lives of millions of individuals from major social media platforms like Facebook. The very same tools, techniques, and processes in the collection and use of big data can be employed to both violate and protect human rights. The collection of one’s personal data by governments, corporations, or any number of other organizations using big data analytics may contribute to the constant feeling of being watched and curtail privacy and freedom of expression.

The pressing question is whether the use of big data by human rights organizations is permissible, given the risks to privacy involved. It is fair to say that human rights organizations should be held to the same standards that privacy advocates require of government. Yet the best intentions of human rights organizations using big data are not enough to protect privacy rights or automatically justify privacy violations. Furthermore, any organization collecting, classifying, and storing sensitive human rights data needs to address issues like data protection, secure storage, safe sharing, and access controls. If a human rights organization deploys data collection and analytic tools, how can it incorporate safeguards that responsibly address the inherent risks to privacy and minimize the potential harms?

IV Toward Guidelines for Big Data Applications in Human Rights

Because of concerns about privacy, value trade-offs, surveillance, and other potential risks and harms that may befall vulnerable populations, a rigorous assessment of the legitimacy and impact of the use of any big data analytics by organizations in the human rights context is vital. This raises the question of what type of guidelines could help steer such an assessment, particularly given the global proliferation of technologies and the plethora of context-specific harms. As the UN Special Rapporteur on Privacy states, “The nature of trans-border data flows and modern information technology requires a global approach to the protection and promotion of human rights and particularly the right to privacy.”Footnote 23 Human rights monitoring organizations may need to update their standards for data collection and analysis to take new technologies like big data into account.

New technological applications do not necessarily require entirely new high-level principles. For example, the principles of safety, privacy, and confidentiality outlined in the OHCHR Training Manual would still apply to big data, but these principles may need further elaboration and development when they are applied to new information collection regimes. At the same time, new technological challenges need new solutions. For example, confidentiality policies may not be achieved through anonymization techniques alone, since it is now widely accepted in computer science that no dataset can be fully anonymized.Footnote 24
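A small example shows why anonymization alone falls short. In the Python sketch below, an “anonymized” record is re-identified by joining quasi-identifiers against a public list, in the spirit of well-known re-identification demonstrations; all records are invented.

```python
# A toy demonstration that anonymization is not confidentiality: an
# "anonymized" record is re-identified by joining quasi-identifiers
# (ZIP code, birth year, gender) against a public list. Data invented.
anonymized_health_data = [
    {"zip": "06511", "birth_year": 1975, "gender": "F", "diagnosis": "X"},
]
public_voter_roll = [
    {"name": "J. Doe", "zip": "06511", "birth_year": 1975, "gender": "F"},
]

for record in anonymized_health_data:
    matches = [
        v for v in public_voter_roll
        if (v["zip"], v["birth_year"], v["gender"])
        == (record["zip"], record["birth_year"], record["gender"])
    ]
    if len(matches) == 1:  # the quasi-identifiers isolate a single person
        print(f'{matches[0]["name"]} re-identified: diagnosis {record["diagnosis"]}')
```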

A major advance in addressing the ethical and responsible use of data is the Harvard Humanitarian Initiative’s Signal Code, subtitled “A Human Rights Approach to Information during Crisis,” which focuses on the right to information in times of crises. Bridging the humanitarian and human rights contexts, the code states that the right to privacy is fundamental in crisis situations and that “[a]ny exception to data privacy and protection during crises exercised by humanitarian actors must be applied in ways consistent with international human rights and humanitarian law and standards.”Footnote 25 Other standards in the code include giving individuals the right to be informed about information collection and use as well as the right to have incorrect information about themselves rectified.

Policy work from other fields, such as international development, will also be relevant to creating guidelines for human rights. UN Global Pulse, the data innovation initiative of the UN Secretary General, has published its own “privacy and data protection principles”Footnote 26 for using data in development work. These principles recommend that actors “adhere to the basic principles of privacy,” “maintain the purpose for data,” and ensure “transparency [as] an ongoing commitment.”Footnote 27 Human rights organizations may also learn lessons from civil society’s demands of governments engaging in surveillance. The International Principles on the Application of Human Rights to Communications Surveillance suggest a number of guidelines designed to ensure that government surveillance does not infringe on the right to privacy, including notifying people when they are being watched.Footnote 28

This brings us to the issue of how to ensure the accountability of human rights organizations using big data for interventions or monitoring. Any actor that uses big data to intervene in or monitor human rights, whether a small domestic NGO or a large international organization, should be responsible for the potential risks and harms to the very populations it seeks to help. Yet since human rights organizations often hold governments to account, asking governments to regulate the use of big data by those very organizations will likely provide an avenue for state suppression of human rights monitoring. As such, traditional regulatory mechanisms are unlikely to be effective in this context.

One emerging framework that seeks to regulate any entity engaged in data collection is the EU’s General Data Protection Regulation, which comes into force in 2018.Footnote 29 This regulation requires all organizations collecting data on EU residents to follow privacy directives about data collection, protection, and consent. For example, individuals in the EU would have the right to withdraw consent given to organizations that collect or process their data. Such organizations would be required to alert the authorities if a data breach of personal information occurred. Any organization, including both companies and nonprofits, could incur heavy fines for noncompliance, of up to 4 percent of global annual revenue.

Another possible avenue may lie in encouraging human rights organizations to engage with technology privacy professionals to assess the possible use of new technologies like big data. Professionals with the appropriate technical, legal, and ethical expertise may be capable of conducting privacy impact assessments of risks that may harm vulnerable populations before a new technology is deployed.Footnote 30 Or perhaps large donors can require grantees to follow data privacy protections as a condition of ongoing funding.

At the end of the day, though, the lack of certainty about the effectiveness of these approaches speaks to the pressing need for both more foundational research and context-specific assessments in this area. Addressing questions about context would require researchers to include more direct input and participation of the vulnerable communities themselves; for example, through field research. As Nissenbaum suggests, protecting privacy is about upholding the contextual integrity of the underlying norms governing information flow. Understanding norms around data privacy and consent, for example, should necessarily involve the communities where those norms exist.

Applying big data analytics in human rights work reveals tensions around privacy that need to be resolved in order to guide current thinking and future decisions. Unfortunately, a sustained knowledge gap in this area puts vulnerable populations at greater risk. And the anxieties and trade-offs around norms and interventions will be compounded as the human rights field addresses “newer” technologies such as artificial intelligence (AI).Footnote 31 Since AI is fueled by big data collection and analytics, the same privacy concerns discussed above would apply. Furthermore, AI entails some form of automated decision-making, which creates dilemmas over whether only a human rights expert, rather than a computer algorithm, can decide about proportionality and trade-offs between rights in real time. A research agenda should work toward guidelines, principles, and practices that anticipate the risks, costs, and benefits inherent in each process involved in emerging technological interventions in the human rights context.

8 The Challenging Power of Data Visualization for Human Rights Advocacy

John Emerson, Margaret L. Satterthwaite, and Anshul Vikram Pandey
I Introduction

In September 2007, The New York Times columnist Nicholas Kristof traveled with Bill Gates to Africa to look at the work the Bill & Melinda Gates Foundation was doing to fight AIDS. In an e-mail to a Times graphics editor, Kristof recalls:

while setting the trip up, it emerged that his initial interest in giving pots of money to fight disease had arisen after he and melinda read a two-part series of articles i did on third world disease in January 1997. until then, their plan had been to give money mainly to get countries wired and full of computers.

bill and melinda recently reread those pieces, and said that it was the second piece in the series, about bad water and diarrhea killing millions of kids a year, that really got them thinking of public health. Great! I was really proud of this impact that my worldwide reporting and 3,500-word article had had. But then bill confessed that actually it wasn’t the article itself that had grabbed him so much -- it was the graphic. It was just a two column, inside graphic, very simple, listing third world health problems and how many people they kill. but he remembered it after all those years and said that it was the single thing that got him redirected toward public health.

No graphic in human history has saved so many lives in africa and asia.Footnote 1

Kristof’s anecdote illustrates the sometimes unexpected power of data visualization: Expressing quantitative information with visuals can lend urgency to messages and make stories more memorable.

Data visualization is the “visual representation of ‘data,’ defined as information which has been abstracted in some schematic form.”Footnote 2 The use of data visualization can strengthen human rights work whenever data is involved, offering modes of analysis and persuasion that other methods of promoting human rights do not. Combining data and visuals allows advocates to harness the power of both statistics and narrative. Data visualization can facilitate understanding and ultimately motivate action. And within human rights research, it can help investigators and researchers draw a bigger picture from individual human rights abuses by allowing them to identify patterns that may suggest the existence of abusive policies, unlawful orders, negligence, or other forms of culpable action or inaction by decision-makers. As human rights researchers and advocates look for new ways to understand the dynamics behind human rights violations, get their messages across, and persuade target audiences, they are also expanding the epistemology of advocacy-oriented human rights research. By broadening their evidence base and using new methods, human rights advocates come to know different things – and to know the same things differently.

The use of data visualization and other visual features for human rights communication and advocacy is a growing trend. A study by New York University’s Center for Human Rights and Global Justice reviewing all Human Rights Watch (HRW) and Amnesty International reports published in 2006, 2010, and 2014 revealed an increase in the use of photographs, satellite imagery, maps, charts, and graphs.Footnote 3 In some cases, data visuals augment existing research and communications methodologies; in other cases, they represent alternative and even novel tools and analytical methods for human rights NGOs.

While data visualization is a powerful tool for communication, the use of data and visualization holds exciting promise as a method of knowledge production. Human rights researchers and advocates are adding new methodologies to their toolbox, drawing on emerging technologies as well as established data analysis techniques to enhance and expand their research, communications, and advocacy. This chapter introduces ways data visualization can be used for human rights analysis, advocacy, and mobilization, and discusses some of the potential benefits and pitfalls of using data visualization in human rights work. After a brief historical review of data visualization for advocacy, we consider recent developments in the “datafication” of human rights, followed by an examination of some assumptions behind, and perils in, visualizing data for human rights advocacy. The goal of this chapter is to provide sufficient grounding for human rights researchers to engage with data visualization in a way that is as powerful, ethical, and rights-enhancing as possible.

II A Brief History of Statistical Graphics and Advocacy

Visual storytelling has a long and colorful history. Past generations not only created depictions of their reality, but also crafted visual explanations and diagrams to convey and understand the invisible forces governing the visible world and other realms beyond perception. In The Book of Trees: Visualizing Branches of Knowledge, Manuel Lima charts the use of the branching tree as a visual metaphor in charts from Mesopotamia to medieval Europe to the present.Footnote 4 In addition to visual storytelling, ancient civilizations developed visual methods to record, understand, and process large numbers. The ancient Babylonians, Egyptians, Greeks, and Chinese used visual systems to record data about vital resources, chart the stars, and map their territories.Footnote 5 While visual storytelling was intended for communication, data visualization was used by the ruling powers to interpret and keep tabs on their empires.Footnote 6

Along with the development of modern statistics in the late eighteenth century, there emerged graphical methods of quantitative analysis – new kinds of charts and graphs to visually show patterns in data.Footnote 7 The Scottish political economist William Playfair was a mechanical engineer, statistician, and activist who authored essays and pamphlets on the politics of the day and helped storm the Bastille in Paris in 1789.Footnote 8 In his 1786 Commercial and Political Atlas,Footnote 9 he published for the first time the kind of line, area, and bar charts that we routinely use today, using them to display imports and exports between Scotland and other countries and territories. He published the first modern pie chart in his 1801 Statistical Breviary.Footnote 10

In the first half of the nineteenth century, new technologies helped spread and inspire enthusiasm for both statistics and data visualization.Footnote 11 Commercial mechanical devices for counting, sorting, and calculating became popular, and the first successful mass-produced mechanical calculator was launched in 1820.Footnote 12 Printing technology, particularly lithography and chromolithography, enabled a more expressive range of printing. The Statistical Society of London was founded in 1834 and, by royal charter, became the Royal Statistical Society in 1887.Footnote 13 As the psychology scholar Michael Friendly notes, between 1850 and 1900, there was explosive growth in both the use of data visualization and the range of topics to which it was applied.Footnote 14 In addition to population and economic statistics, mid-nineteenth-century Paris saw the publication of medical and mortality statistics, demographics, and criminal justice data.Footnote 15

A few examples from this period show how data visualization contributed new findings using spatial and other forms of analysis, and allowed such findings to be made meaningful to a broader public via visual display.

In 1854, Dr. John Snow mapped cholera deaths around the thirteen public wells in the Soho district of London.Footnote 16 Using this method, he made a dramatic discovery: there was a particular cluster of deaths around one water pump on Broad Street. His findings ran contrary to the prevailing theories of disease. At the time, the medical establishment believed in the miasma theory, which held that cholera and other diseases, such as chlamydia and the Black Death, were caused by “bad air.” Dr. Snow believed that cholera was spread from person to person through polluted food and water – a predecessor to the germ theory of disease. Despite skepticism from the medical establishment, Snow used his map to convince the governing council to remove the handle from the pump, and the outbreak quickly subsided.Footnote 17

Florence Nightingale is known primarily as one of the founders of modern nursing, but she was also a statistician who used data visualization to campaign for improvements in British military medicine.Footnote 18 In 1858, she popularized a type of pie chart known as the polar area diagram.Footnote 19 The diagram divides a circle into wedges that extend at different lengths from the center to depict magnitude. Nightingale used statistical graphics in her reports to members of Parliament about the condition of medical care in the Crimean War to illustrate how improvements in hygiene could save lives: at a glance, one could see that far more soldiers died of sickness than of wounds sustained in battle.Footnote 20 Nightingale persuaded Queen Victoria to appoint a Royal Commission on the Health of the Army, and her advocacy, reports, and the work of the commission eventually led to systemic changes in the design and practices of UK hospitals.Footnote 21

After a long and distinguished career as a civil engineer in France, Charles Minard devoted himself in 1851 to research illustrated with graphic tables and figurative maps.Footnote 22 His 1869 visualization of Napoleon’s 1812 Russian Campaign shows the march of the French army from the Polish-Russian border toward Moscow and back.Footnote 23 The chart was heralded by Minard’s contemporaries and is held up by twentieth-century data visualization critics as a marvel of clarity and data density. The visualization displays six different dimensions within the same graphic: The thickness of the main line shows the number of Napoleon’s troops; the scale of the line shows the distance traveled; rivers are depicted and cities are labeled; dates along the route indicate the timing of the march’s progress; the orientation of the line shows the direction of travel; and a line chart below the route tracks temperature. Reading the map from left to right and back again, another message beyond the data emerges: As the march proceeds and retreats, the horrific toll of the campaign slowly reveals itself as the troop numbers decline dramatically. The graphic not only details historical fact, but serves as a powerful antiwar statement.

In the United States, data visualization was used to sound the alarm on the frequency of racist violence and its consequences. In 1883, the Chicago Tribune began publishing annual data on lynching in the form of a monthly calendar listing victims by date.Footnote 24 The journalist and anti-lynching campaigner Ida B. Wells cited the data in her speeches and articles. The annual publication of state and national statistics fed the public’s outrage, and on September 1, 1901, the Sunday Tribune published a full front-page table of data on 3,000 lynchings committed over 20 years, as well as information about the victims of 101 lynchings perpetrated in 1901 and the allegations against the victims that their killers had cited to rationalize the violence.Footnote 25 Rather than focusing on individual cases, the data, table, and narrative exploration presented a powerful picture of the frequency and scale of the crisis: though lynchings occurred in mostly southern states, they were found in nearly every state of the union. The pages appeal to public opinion to support change, explicitly calling out the failure of state and local law enforcement and demanding congressional action.

These historic charts and graphs are analytical, but also rhetorical, using visual conventions to identify the dynamics of important phenomena, to communicate findings, and to make a persuasive argument for policy change. We will investigate some of the promises and pitfalls of coupling visual rhetoric with data below, but first we briefly examine datafication and human rights.

III Datafication and Human Rights

As a field, advocacy-oriented human rights research traditionally favors qualitative over quantitative research methodologies. Research is typically driven by interviews with victims, witnesses, and alleged perpetrators, usually supplemented by official documents, secondary sources, and media accounts.Footnote 26 Additional qualitative methods, including focus groups and participatory observation, are used by some groups, as are forensic methods such as ballistics, crime scene investigations, and exhumations. Quantitative methods such as data analysis and econometrics have been very rare until recently. There are many reasons for the traditional emphasis on qualitative methods. Historically, advocacy-oriented human rights research developed out of legal and journalistic traditions.Footnote 27 Ethically, rights advocates are committed to the individual human story. Human rights practice has been defined as “the craft of bringing together legal norms and human stories in the service of justice.”Footnote 28

At the same time, researchers in social science, epidemiology, and other fields have long used quantitative methods for research on human rights related issues. Political scientists have developed cross-national time-series datasets to interrogate the relationships between human rights and major social, economic, and political processes.Footnote 29 Epidemiologists have studied inequalities in access to health care and disparities in health outcomes between social groups.Footnote 30 Research psychologists have examined the way human psychology may limit our ability to respond to widespread suffering such as that arising from genocide and mass displacement.Footnote 31

Human rights NGOs are increasingly embracing scientifically based methods of research that involve data and quantification, and they are beginning to use data visualization to reach broader audiences. Using data-driven methods from other fields enables different ways of knowing, of gathering and processing information, and of analyzing findings.

The spread of digital network infrastructure, increased computing speeds, and a decrease in the cost of digital storage have made collecting, sharing, and saving data easy and prevalent. The economic accessibility of mobile technology has made cell phones widely available, even in poor countries.Footnote 32 Smartphones have put Internet access and the production of digital content in the hands of the people – generating an enormous swarm of digital exhaust and big data about many populations across the world. This explosion of new data reflects a democratization of sorts, but it also puts a new means of surveillance at the command of state agents,Footnote 33 increases the power of private data owners and brokers, and creates pockets of digital exclusion, where communities that do not benefit from the digital revolution are further marginalized.

In this “datafied” world, decision-makers seek evidence in the form of data and quantitative analysis. As Sally Merry notes, “quantitative measures promise to provide accurate information that allows policy makers, investors, government officials, and the general public to make informed decisions. The information appears to be objective, scientific, and transparent.”Footnote 34 While Merry’s language suggests that numbers themselves promise to smooth over the messiness of decision-making by appearing scientific, it is, of course, human beings who insist on quantification. Theodore Porter has identified quantification as a “technology of distance” capable of mediating distrust, such as that between governments and citizens.Footnote 35 In the human rights context, quantification sometimes functions as a way of disappearing the judgment-laden practices of monitoring and assessment, where governments may distrust their monitors as much as monitors distrust officials.Footnote 36 In such a context, knowing how many were killed, assaulted, or detained can be seen to satisfy a yearning for objective knowledge in a chaotic and often brutal world.

Metrics are also attractive because they can be weighed against other data and wrapped up into “indicators.” Kevin Davis, Benedict Kingsbury, and Sally Merry define indicators as:

a named collection of rank-ordered data that purports to represent the past or projected performance of different units. The data are generated through a process that simplifies raw data about a complex social phenomenon. The data, in this simplified and processed form, are capable of being used to compare particular units of analysis (such as countries or institutions or corporations), synchronically or over time, and to evaluate their performance by reference to one or more standards.Footnote 37

In the human rights realm, indicators have been developed, inter alia, to directly measure human rights violations,Footnote 38 assess compliance with treaty norms,Footnote 39 measure the impacts of company activities on human rights,Footnote 40 and ensure that development processes and humanitarian aid are delivered in a rights-respecting manner.Footnote 41 As Merry notes in The Seductions of Quantification, indicators are attractive in their simplicity, particularly country rankings that have proven effective in catching the attention of the media and the public.Footnote 42 While indicators provide a convenient analysis at a glance, they are complicated and often problematic: Data collected may be incomplete or biased, not comparable between countries, or built on metrics of behavior that fail to capture the diversity of values or reasons behind that behavior.Footnote 43 There may also be a slippage between the norm and the data used to assess the norm, a dynamic in which difficult-to-measure phenomena are assessed using proxy indicators that may become attenuated from the original norm.

In the human rights field, the relationship between the norm and the data can be especially complicated. Human rights data is almost always incomplete and often fraught with bias and assumptions.Footnote 44 The imperfect nature of human rights data is a consequence of the challenges facing its collection. There are inherent difficulties in getting a complete or unbiased dataset of anything, but it is particularly challenging when it is in a government’s self-interest to hide abuses and obstruct accountability. Marginalized groups may be excluded from the available information as a result of implicit bias, or even by design.Footnote 45 For human rights researchers, there may be dangers and difficulties associated with asking certain questions or accessing certain information. Much of the data gathered about civil and political rights violations is collected through case reports by human rights organizations, making it inherently biased by factors such as the organization’s familiarity and accessibility, the victims’ willingness to report, and the security situation.Footnote 46 Data about economic and social rights may seem easier to gather, since there is a plethora of official data in most countries about education, housing, water, and other core rights. This data is not designed to assess rights, however, meaning that it is, at best, proxy data for rights fulfillment.Footnote 47

However, even when there are flaws in the data collection or the data itself, the results can sometimes be useful to researchers and rights advocates. For instance, if the methodology for gathering data is consistent year after year, one may be able to draw certain types of conclusions about trends in respect for rights over time even absent a representative sample. If the data in question was collected by a government agency, it can be strategic for activists to lobby the government using its own data despite the flaws it contains, since such a strategy makes the conclusions that much harder to refute.

Further, a great power of statistics is the ability to work with data that is incomplete, biased, and uncertain – and to quantify bias and uncertainty with some measure of precision. Patrick Ball, Megan Price, and their colleagues at the Human Rights Data Analysis Group have pioneered the application of multiple systems estimation and other statistical methods to work with limited data in post-conflict and ongoing conflict zones.Footnote 48 The group is often asked to evaluate or correct traditional casualty counts using their experience with statistical inferences.Footnote 49 They have contributed data analysis to both national and international criminal tribunals and truth commissions.
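The intuition behind multiple systems estimation can be conveyed through its simplest two-list case, the capture-recapture estimator sketched below in Python. The counts are invented, and HRDAG’s actual work uses many lists and far more sophisticated models that account for dependence among them.

```python
# The simplest instance of multiple systems estimation: a two-list
# capture-recapture (Chapman) estimate of total deaths from two
# overlapping casualty lists. All figures invented for illustration.
n1 = 300   # deaths documented by organization A
n2 = 200   # deaths documented by organization B
m = 60     # deaths appearing on both lists (matched records)

# Chapman's nearly unbiased variant of the Lincoln-Petersen estimator
n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
documented = n1 + n2 - m  # unique deaths actually on either list

print(f"Documented deaths: {documented}")        # 440
print(f"Estimated total deaths: {n_hat:.0f}")    # ~991
```

The gap between the documented count and the estimate is precisely the hidden population the text describes: victims who appear on no list at all.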

IV Challenges of Data

Data is always an abstraction – a representation of an idea or phenomenon. Data is also a product of its collection method, whether it is a recording of a signal, survey, mechanical trace, or digital log. As the scholar Laura Kurgan explains:

There is no such thing as raw data. Data are always translated such that they might be presented. The images, lists, graphs, and maps that represent those data are all interpretations. And there is no such thing as neutral data. Data are always collected for a specific purpose, by a combination of people, technology, money, commerce, and government. The phrase “data visualization” in that sense, is a bit redundant: data are already a visualization.Footnote 50

Analysts may try to use algorithms and data to limit human bias and preconceptions in decision-making. However, researchers can’t help but cast a human shadow on facts and figures; data is affected by people’s choices about what to collect, when and how it is collected, even who is doing the collecting. Human rights researchers have begun to call attention to these hidden aspects of data gathering and analysis, examining the rights implications of their elision, and the perils and promise in their use.

For example, data-driven policing based on computerized analyses of arrest and crime data has been advanced as a method for making law enforcement less prone to bias.Footnote 51 However, the use of algorithms and visualization in such “predictive policing” often amplifies existing assumptions and historical patterns of prejudice and discrimination, driving police to increase scrutiny of already over-policed neighborhoods.Footnote 52 A human rights critique is needed to assess the use of algorithms in predictive policing as well as other practices, like the use of computer scoring to recommend sentencing ranges in overcrowded justice systems.Footnote 53 Techniques developed in the algorithmic accountability movement are especially useful here: Audits and reverse engineering can uncover hidden bias and discrimination,Footnote 54 which could be assessed against human rights norms.

Metadata describes the origin story of data: the time and place it was created, its sender and receiver, the phone used, network used, IP address, or type of camera. As former US National Security Agency General Counsel Stewart Baker hauntingly put it, “Metadata absolutely tells you everything about somebody’s life. If you have enough metadata, you don’t really need content.”Footnote 55 Outside the domain of state security or intelligence, metadata can be useful to human rights researchers and activists as well, for instance, to counter claims that incriminating images or video recordings were falsified or to corroborate that a set of photos were taken in the same place, on the same day, by the same camera. Amnesty International used metadata from photos and videos to corroborate attacks on suspected Boko Haram supporters by Nigerian soldiers, thereby implicating them in war crimes.Footnote 56 Building on its experience training grassroots activists to use video for advocacy, the NGO WITNESS worked with the Guardian Project to develop a mobile application called CameraV to help citizen journalists and human rights activists manage the digital media and metadata on their smartphones by automatically encrypting and transmitting media files to a secure server, or, conversely, by deleting and obscuring the metadata when it could put activists at risk.Footnote 57
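To illustrate the kind of metadata check described above, the following sketch reads EXIF fields from a set of photos to see whether they share a capture date and camera model. It assumes the Pillow imaging library; the file names are hypothetical, and such fields can be stripped or spoofed, so they serve as corroborating signals rather than proof.

```python
# A hedged sketch of a basic metadata check: reading EXIF fields from
# photos to compare capture dates and camera models. Assumes the Pillow
# library; file names are hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return human-readable EXIF tags for one image file."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

for photo in ["incident_a.jpg", "incident_b.jpg"]:
    tags = exif_summary(photo)
    # Matching DateTime and Model across files is one (spoofable) signal
    # that the photos came from the same camera on the same day.
    print(photo, tags.get("DateTime"), tags.get("Model"))
```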

In addition to concerns about accuracy, rights groups should be cautious in their approach to privacy and ownership of data, and to its analysis and expression through visualization. Over the years, researchers and lawyers have developed a set of best practices to guide the proper collection and use of data, with particular attention to human subjects research.Footnote 58 Questions related to the collection of data go to the heart of what constitutes ethical research methods: Did the subjects give informed consent regarding the way their personal data would be used? Does using, collecting, or publishing this data put anyone at risk? Is the data appropriately protected or anonymized? The rules about data continue to evolve and are not without gray areas and open questions. Universities in the United States and many other countries have review processes in place to provide guidance and ensure that critical ethical questions are raised before research is approved. In fact, these ethical questions and review processes are required under US law for research institutions that receive federal funding. However, the “common rule” underlying these processes is widely seen as out of date when it comes to data ethics – especially big data ethics.Footnote 59 Ethical discussions and guidelines about data visualization are almost nonexistent, with a 2016 Responsible Data Forum on the topic a very welcome outlier.Footnote 60 The forum brought together academics, activists, and visualization practitioners to discuss issues such as the ethical obligation to ensure that data is responsibly collected and stored before being visualized; representing bias, uncertainty, and ambiguity in data visualization; and the role of empathy and data visualization in social change.Footnote 61

V Visualizing Quantitative Data

With data and data visualizations, physical phenomena like the impact of disease and the movement of troops can become legible – as can systems like economies, relationships, and networks of power. Using data to examine policies, populations, actions, and outcomes over time, individual cases can be seen as instances of widespread and systematic patterns of abuse. However, possession of data does not constitute knowledge. Data requires interpretation, context, and framing. Graphics are a powerful way to help contextualize and frame data, present interpretations, and develop understanding. Through visualization and analysis, correlations and patterns of structural violence and discrimination as well as the scope or systemic nature of abuses can become clear.

Data visualization is useful not only for explaining patterns in a dataset, but also for discovering patterns. For its 2011 report A Costly Move, HRW used mapping tools to visualize and analyze patterns in a large dataset of more than five million records concerning the transfer of immigrant detainees around the United States.Footnote 62 Analyzing twelve years of data, the group found that detainees were transferred repeatedly, often to remote detention centers, a process that impeded their right to fair immigration proceedings. In 2000, HRW and the American Association for the Advancement of Science visualized statistical analyses of extrajudicial executions and refugee flows from Kosovo to Albania in 1999. Instead of random violence, they found distinct surges of activity that suggested purposeful, planned, and coordinated attacks by government forces.Footnote 63

Data is particularly useful to those seeking to understand structural, systemic violations such as abuses of economic, social, and cultural rights. Taking data from development surveys, activists have used data visualization to compare trends in health,Footnote 64 education,Footnote 65 housing,Footnote 66 and other areas against government budgets, tax revenues, and other economic data to paint a picture of progressive realization of rights against “maximum available resources,” as outlined in the International Covenant on Economic, Social, and Cultural Rights.Footnote 67

Within the context of communications and advocacy, one powerful characteristic of data visualization is that it is perceived as scientific. In one study, Aner Tal and Brian Wansink found that including visual elements associated with science, such as graphs, can enhance a message’s persuasiveness.Footnote 68 Our research group at New York University also found that when viewers did not already hold strong opinions against the subject matter, graphics presented in diagrams or charts were more persuasive than the same information presented in a table.Footnote 69 In another study, on people’s ability to remember visualizations, Michelle Borkin and colleagues found that specific visual elements of a given presentation affected its memorability. Memorable graphics had a main visual focus, recognizable objects, and clear titles and annotations.Footnote 70

The persuasive power of charts and graphs may come at a cost: In a report full of text, numbers crucial to making a rights case are a prime target for attack and dispute.Footnote 71 The currency of human rights work is credibility, and researchers and program staff at human rights organizations carry the additional burden of having to take special care to protect their credibility and the incontrovertibility of the evidence they present. If a number in a report is convincingly challenged, the rest of the report may be called into question. Charts and numbers are also easily taken out of context, with readers understanding representations of data as statements of fact. This is all the more reason to interrogate the methodology and unpack conditions of production of specific datasets before they are used in visualizations. The powerful impact and memorability of data visualization come with a responsibility to put this knowledge to use with care and attention to the potential ethical pitfalls.

Effective data visualization can make findings clear and compelling at a glance. It provides readers with an interface to navigate great quantities of data without having to drill down into the various data points. This can obscure the fact that visualization is only a part of working with data – and often only a small part. The lead-up to the creation of a data visualization can be the key to its usefulness. Acquiring, cleaning, preparing, and analyzing data very often make up the bulk of the work. When exploring a visualization, the sometimes tedious and decidedly unsexy data work that has been done behind the scenes is not always immediately visible. And given its persuasive power, data visualization in polished and final form may gloss over issues with data collection or analysis. This may be especially true with human rights visualization, where analysis includes normative judgments about the fit between a given dataset and the legal standards at issue.

As noted above, the data used in visualization is subject to bias and to the assumptions underlying its collection and processing. The presentation and design of a visualization are also susceptible to distortion and misinterpretation. In a series of experiments, our research group at NYU found empirically that common distortion techniques do indeed mislead viewers. Such techniques include using a truncated y-axis (starting at a number greater than zero when illustrating percentages) or using area to represent quantity (such as comparing the areas of circles).Footnote 72 While manipulation of the facts or deception of the reader is usually unintentional in the human rights realm, an accidentally misleading visualization can muddy the message and damage advocacy efforts.Footnote 73 As data and visualization command attention, they can also become the focus of criticism. If a misleading visualization is called to account, it can detract from the credibility of the rest of a given project’s research and advocacy, and perhaps even damage the reputation of the organization.
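A small, self-contained illustration of the truncated-axis distortion, using matplotlib and invented enrollment figures, shows how the same data can be made to look dramatic or modest:

```python
# The same data plotted twice: once with a truncated y-axis that
# exaggerates a 4-point change, once with a zero baseline. Numbers
# are invented for illustration.
import matplotlib.pyplot as plt

years = [2012, 2013, 2014, 2015]
enrollment_pct = [89, 90, 91, 93]  # hypothetical school enrollment rates

fig, (truncated, honest) = plt.subplots(1, 2, figsize=(8, 3))

truncated.bar(years, enrollment_pct)
truncated.set_ylim(88, 94)  # truncation makes the change look steep
truncated.set_title("Truncated y-axis")

honest.bar(years, enrollment_pct)
honest.set_ylim(0, 100)  # zero baseline keeps proportions honest
honest.set_title("Zero-based y-axis")

plt.tight_layout()
plt.savefig("axis_truncation.png")
```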

These risks must be borne in mind as advocates have the opportunity to analyze and visualize the increasing quantities of data made available online as a result of open government efforts. While the call for open sharing of scientific data long predates the Internet, connectivity has spurred an explosion in the use, production, and demand for high-quality data, particularly data collected by government agencies. Governments are sharing great quantities of data online, and making them accessible via Freedom of Information or other “sunshine” requests. Open government data has been used to uncover and analyze patterns of human rights abuse in criminal justice data,Footnote 74 inequalities in wage data,Footnote 75 unequal burdens of pollution,Footnote 76 and the impacts of climate change in environmental data.Footnote 77 Data created to track human development, like that collected by international demographic and health surveys, has proven to be fruitful for human rights analysis as well.Footnote 78 Under the title “Visualizing Rights,” the Center for Economic and Social Rights (CESR) has published a series of country fact sheets that use publicly available data for analysis and visualization to convey patterns of discrimination and the failure to fulfill rights obligations, e.g., the rights to health, food, and education in Guatemala,Footnote 79 or with regard to poverty, hunger, and housing in Egypt.Footnote 80 The CESR briefs are designed to be read at a glance by a busy audience of policy officials and individuals who staff intergovernmental human rights mechanisms.

VI Visualizing Qualitative Data

A core output of traditional human rights research is the fact-finding report, which tends to rely on qualitative data such as the testimony of witnesses and survivors of human rights violations.Footnote 81 Visualization can provide useful context for the broader rights messages in such reports. For instance, to provide visual and spatial context for the findings presented in human rights reporting, it is not uncommon to include a timeline of events,Footnote 82 a map of the areas affected,Footnote 83 or a map of towns the researcher visited.Footnote 84

Qualitative visualization for human rights generally falls into the category of visual storytelling. Techniques like breaking down an explanation into stages and walking the audience through these stages can elucidate the narrative, building an understanding of the sequence of events or the layers of information. Examining Amnesty International and HRW reports, the 2016 NYU study found that the use of visual features nearly tripled between 2006 and 2014, and that the majority of visual features used were qualitative.Footnote 85 For example, the study found that the number of reports using satellite images increased from one in 2006 to four in 2010 to seventeen in 2014. Most maps included in reports during this period displayed geographic information (such as places visited by researchers) and were only rarely used for quantitative display or analysis (such as displaying numbers of refugees).

Some of the changes in human rights reporting were made possible by advances in technology and newly available data. In the 1990s, Global Positioning System (GPS) and high-resolution satellite imagery became available for civilian use.Footnote 86 Since then, high-quality satellite imagery has become increasingly accessible from vendors and through free applications like Google Earth and other web-based mapping tools. Human rights groups have used GPS and satellite imagery to present vivid pictures of changes brought about by events such as mass violence, secret detention, extrajudicial executions, internal displacement, forced evictions, and displacement caused by development projects.Footnote 87 Satellite imagery has proven especially powerful in showing the visual differences before and after an event.Footnote 88 It can show the creation or destruction of infrastructure by marking changes in the landscape designed to hide underground weapons development, or migrations of people by tracking changes in the contours of refugee camps. Satellite images provide local activists with a way to contextualize and document a bigger picture than can be seen from the ground, and enable human rights researchers outside of the country to survey places that are difficult or dangerous to access.Footnote 89 These techniques are especially crucial for closed states and in emergency contexts, though researchers based outside of the countries of interest should avoid relying solely on geospatial analysis, since it may not include local voices or context. Integrating local voices with satellite imagery provides both the “near” and the “far” and paints a more complete picture of the situation on the ground.Footnote 90

Network graphs are another visual tool that can help illuminate human rights reporting and narrative. Network graphs are a special kind of visualization showing relationships between and among entities. A family tree is one simple example of a network graph. Networks relevant to human rights investigations include networks of corruption, formal and informal chains of command, the flow of resources among industries and the government agencies charged with regulating them, and relationships between military and paramilitary groups. Visualization serves as a useful shorthand, a way to illustrate complex networks that would be cumbersome to describe in text. For example, for its 2003 report on violence in the Ituri region of the Democratic Republic of Congo, HRW used a network graph to illustrate the web of training, funding, and alliances among national governments, national militaries, and local paramilitary groups.Footnote 91 The graph clarifies the complicity of the national governments in local atrocities. The 2007 Global Witness report Cambodia’s Family Trees uses a network graph to illustrate the connections and relationships among more than sixty individuals and family members, companies, the military, and government agencies in a deeply entrenched web of corruption around illegal logging.Footnote 92

Graph theory, the study of networks and their properties, can be used to model the spread of information or influence along social networks. One of Google’s early innovations was analyzing the network structure of the Internet – i.e., determining which pages are linked to from other pages – in order to rank web pages by relevance. Graph theory algorithms that weigh connections among entities to gauge their importance have proven useful to help navigate millions of pages in document dumps such as WikiLeaks and the Panama Papers. Network analysis and visualization can help make these large sets of data navigable and give researchers and the public a starting point toward understanding connections between parties. Like statistics, network analysis is a tool of social scientists that is increasingly being used by human rights researchers. As noted in Jay Aronson’s chapter (Chapter 6), the Carter Center has used network analysis of social media postings by armed groups in Syria to estimate chains of command and track emerging and shifting alliances among groups.Footnote 93
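As a rough sketch of this kind of network ranking, the following snippet uses the networkx library’s PageRank implementation on a small, invented graph of entities extracted from documents. In a real document dump the graph would have millions of nodes, but the principle of surfacing the most-connected entities is the same.

```python
# A sketch of PageRank-style scoring to surface the most connected
# entities in a document set. Assumes the networkx library; the
# entities and links are invented.
import networkx as nx

G = nx.DiGraph()
# Edges point from an entity to another entity it references.
G.add_edges_from([
    ("shell_co_A", "law_firm"),
    ("shell_co_B", "law_firm"),
    ("shell_co_B", "official_X"),
    ("law_firm", "official_X"),
    ("shell_co_C", "official_X"),
])

# Entities referenced by many well-connected entities score highest,
# giving researchers a starting point for closer investigation.
scores = nx.pagerank(G)
for entity, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{entity}: {score:.3f}")
```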

At the nexus of qualitative and quantitative analysis, Forensic Architecture is an international collaboration that researches incidents around the world through crime scene reconstructions of human rights violations. Founded by the architect Eyal Weizman, Forensic Architecture uses diverse sources, including photos, cell phone audio and video, satellite imagery, digital mapping, and security camera and broadcast television footage, to painstakingly reconstruct the scene of a violation as a virtual three-dimensional architectural model. The team looks for traces and clues in the data sources. For instance, ascertaining the time of an incident from time stamps in digital metadata and even the fall of shadows in imagery and footage allows the team to establish a sequence of events and uncover falsifications or omissions in recordings. The reconstructions go so far as to adjust the virtual camera lens to match the parallax distortion of video, allowing for analysis of things like line of vision or the position of a munitions impact. The spatial data and architectural model become the nexus that stitches the reconstruction together to determine just what happened at a given point in time and space, how it happened, and who was responsible.Footnote 94

Recent developments in machine learning have also made possible a kind of qualitative data analysis of imagery by computers: the use of computer vision algorithms to detect patterns and recognize objects depicted in digital image data. This can include faces or pictures of weapons in massive bodies of social media images, or feature detection in footage from camera-enabled drones, closed-circuit video surveillance, or satellite imagery. Applying these techniques to human rights research, the Event Labeling through Analytic Media Processing (E-LAMP) project at Carnegie Mellon University combines computer vision and machine learning for conflict monitoring by searching through large volumes of video for objects (weapons, military vehicles, buildings, etc.), actions (explosions, tank movement, gunfire, structures collapsing, etc.), written text, speech acts, human behaviors (running, crowd formation, crying, screaming, etc.), and classes of people such as soldiers, children, or corpses.Footnote 95 Project ARCADE is a prototype application that uses computer vision to analyze satellite imagery in order to automate the detection of bomb crater strikes and determine their origin.Footnote 96 In these instances, after its algorithmic processing, the source imagery is often annotated with visual indicators that are more readily interpreted by humans and the image data is made understandable through a layer of visualization. Such applications are likely to become more common in the human rights field, though, as noted above, machine learning is only as good as its input and the assumptions embedded in it.
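As a hedged illustration of this kind of object detection, the sketch below runs an off-the-shelf torchvision detection model over a single hypothetical video frame and prints the objects it finds with high confidence. This is not E-LAMP’s or ARCADE’s actual pipeline, merely a minimal example of the underlying technique:

```python
# A minimal object-detection sketch using a pretrained torchvision
# model. The frame file name is hypothetical; a conflict monitor would
# scan thousands of frames and flag those containing objects of interest.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]  # COCO labels: person, truck, etc.

img = read_image("frame_0042.png")
batch = [weights.transforms()(img)]  # preprocess to the model's input format

with torch.no_grad():
    detections = model(batch)[0]

# Keep only confidently detected objects.
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:
        print(categories[label], float(score))
```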

VII Technical Decisions Convey Meaning

Given the vital role that data visualization can play in analyzing data and delivering a persuasive human rights message, there is great temptation and good reason to incorporate it into the process of human rights research and its outputs, such as reports and other advocacy products. However, the power of data visualization must be harnessed with care. The techniques of data visualization are a form of knowledge production, with constraints and connotations associated with forms and their interpretation. The meanings conveyed by seemingly technical decisions must be unpacked and considered when designing human rights visualizations.

The keystone technique in the field of data visualization is “encoding”: using visual properties to represent specific aspects of a dataset.Footnote 97 Visual encoding associates visual properties like location, color, shape, texture, and symbol with data properties like time, category, and amount. More than one variable and visual encoding can be combined within the same representation, such as the x-axis for time, the y-axis for magnitude, and color for category.Footnote 98 Variables need not be strictly visual, either. Data can be encoded using different aspects of sound (tone, volume, pitch) or touch (height, location, texture).Footnote 99 A great power of visualization is the ability to combine multiple encodings within the same visual space, enabling rich exploration. Some kinds of data, like geographic data, can also act as a bridge between other kinds of data. For instance, poverty rates in a particular geographic area can be compared against resources and services available in the same area.Footnote 100 A standard collection of chart styles has emerged as convention, and these styles come preloaded in popular data tools like Microsoft Excel. As audiences become familiar with certain chart forms, their legibility is reinforced. As the typographer Zuzanna Licko once noted, “You read best what you read most.”Footnote 101 The pie chart, bar chart, and line chart have become visual conventions. However, the form of a visualization and its interface are not neutral; each constitutes a choice, a way of structuring experience and knowledge. Form imposes its own assumptions and arguments, and the most familiar and common forms of data visualization may not always be suitable for the data at hand and can, in some cases, obscure or even distort findings. In the human rights context, it is important to reflect on design decisions and visual conventions – especially when designing research-based products for diverse and often international audiences.
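A minimal sketch of combining encodings, using matplotlib and invented incident counts, maps time to the x-axis, magnitude to the y-axis, and category to color:

```python
# Combining visual encodings: x-position for time, y-position for
# magnitude, color for category. Data are invented for illustration.
import matplotlib.pyplot as plt

reports = {  # hypothetical counts of documented incidents per year
    "arbitrary detention": [12, 18, 25, 31],
    "forced eviction": [40, 35, 28, 22],
}
years = [2013, 2014, 2015, 2016]

for category, counts in reports.items():
    # One line per category: color distinguishes category, position
    # encodes time (x) and magnitude (y).
    plt.plot(years, counts, marker="o", label=category)

plt.xlabel("Year")
plt.ylabel("Documented incidents")
plt.legend(title="Violation type")
plt.savefig("encodings.png")
```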

Visual tone can also carry connotations. For instance, does a visualization make beautiful something monstrous and tragic? This is an especially important question in a field marked by its “affect-laden conception of humanity.”Footnote 102 Presenting visualizations in a “neutral” style could downplay the assumptions behind the data and its collection and analysis. Seemingly less weighty design decisions can also affect the way a map or other graphic is perceived; for example, choices that affect color contrast and legibility can obscure a graphic’s meaning or, at their worst, create a misleading visualization.

Color can also carry cultural weight. Consider the color orange, which is associated with Hindu nationalists in India and with Unionism and the Orange Order in Northern Ireland, and was used by groups that participated in the 2004–05 “Orange Revolution” in Ukraine. Depending on how color is used in a given visualization, such associations can invoke secondary cultural meanings, particularly when representing geopolitical data across borders.

VIII Access and Inclusion

Particularly for the sake of advocacy, outreach, and transparency, it is important for human rights information to be accessible and inclusive. Though a strength of data visualization is the ability to invite readers to engage with analysis and communications, an ongoing challenge is access and inclusion. How can human rights researchers and NGOs make their visualizations accessible to the populations to whom their data is relevant? Or work with communities to help them access the tools and expertise necessary to generate their own data visualizations?

One challenge for interactive digital visualization is the physical constraint of the screen, particularly small mobile screens. Smartphones and mobile Internet access have overtaken desktop computers and broadband connections in much of the world.Footnote 103 While this increasing ubiquity of access is promising, smaller screens make it harder to keep complex data visualizations legible and interactive.

The physical attributes of a visualization and its interaction can profoundly affect how it is accessed and understood by users of different abilities. Limited motor control, color blindness, color vision deficiency, restricted vision, or blindness can affect how a visualization is read. The visualization community has made great strides toward awareness of color blindness and color vision deficiency. But while tools are available to check the accessibility of color use, there is still far more work to be done to make visualizations accessible to blind users. Can visual information be accessed through other means as well, such as accessible HTML, that can be processed by automated screen-reader software? Can the visualization be navigated with a keyboard instead of only by a mouse?
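One such check can be made concrete: the WCAG 2.0 guidelines define a contrast ratio between text and background colors, with 4.5:1 the minimum for normal text at conformance level AA. The sketch below implements that published formula in Python; the example colors are invented.

```python
# Computing the WCAG 2.0 contrast ratio between two sRGB colors,
# following the published relative-luminance formula.

def relative_luminance(rgb: tuple) -> float:
    """WCAG 2.0 relative luminance of an sRGB color given as 0-255 ints."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """Ratio of lighter to darker luminance; WCAG AA asks for >= 4.5."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Pale gray text on white fails AA (~1.7); near-black passes easily (~16).
print(round(contrast_ratio((200, 200, 200), (255, 255, 255)), 2))
print(round(contrast_ratio((33, 33, 33), (255, 255, 255)), 2))
```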

In addition to visual encodings, data visualization relies on culturally coded visual metaphors: an “up” symbol or bigger size means “more,” time moves from left to right, clusters indicate similarity, big is important, etc. As a result, reading, interpreting, and understanding data visualizations require a certain degree of cultural and visual literacy. In the case of quantitative data visualizations, numeracy is key. The success of a data visualization for both analysis and advocacy relies not only on the visualization itself, but on its accessibility to the reader. For its advocacy work to promote better health, the design team behind the now ubiquitous Nutrition Facts label tested more than thirty variations of charts and other formats before settling on the organized table we know so well. They found that graphs, icons, and pie charts were more complicated for consumers than they’d originally thought, requiring a relatively high degree of visual literacy to understand.Footnote 104

IX Critical Mapping

Mapping is a particularly popular form of data visualization being used in human rights research and advocacy today. Critical cartography is a set of practices and critiques based on the premise that maps are not neutral; it holds that visual design decisions about what to include or exclude, what boundaries to show, etc., are political expressions about space and power. Authors choose the data to include or exclude and decide how to highlight it. Marginalized populations may be excluded from maps for a variety of reasons, or maps may privilege spaces of commerce over spaces of community. Commercial vendors may not consider it profitable to digitize the streets and addresses of villages in developing countries. State statistical agencies with limited resources must inevitably prioritize their activities and focus.

Even when focusing on specific evidence, decisions about what to include or exclude and how to represent visual elements can carry political implications; histories can be contentious, particularly where nationalism and national identities are woven into narratives of conflict. The drawing of maps can raise human rights issues in the act of visualization itself. For example, borders and place names can be particularly contentious. The government of China, for instance, takes border demarcations very seriously – confiscating maps that, through their visuals, “violate the country’s positions on issues such as Taiwan, islands in the South China Sea or territory in dispute with India” or that reveal “sensitive information.”Footnote 105 Lawmakers in India also considered a draft bill threatening fines and jail time for depicting or distributing a “wrong or false” map of its borders.Footnote 106 Human rights researchers need to take political sensitivities and significance into account when they engage in visualization involving maps – both the visual expression and where sensitive interactive media is hosted.

Mapping for social justice purposes has a long history, from the historical examples above to the current use of digital mapping. A powerful example of “counter-mapping,” the Detroit Geographic Expedition was formed after the 1967 Detroit riot to conduct and publish research on racial injustice in Detroit and offered free college courses on geography and urban planning for inner-city African American students. The group critically challenged plans put forward by the Board of Education, visualized inequities of Detroit’s public spaces for children’s play, and mapped traffic fatalities of children along commuter routes, all of which pointed to patterns of spatial and racial injustice in the built environment.Footnote 107

To claim the power traditionally held by governments and companies, grassroots organizations and individuals are also using tools for digital mapmaking. For example, in 1993, Daniel Weiner and Trevor Harris worked with communities in the central lowlands of South Africa to develop participatory applications of GIS in support of the redistribution of natural resources in the post-apartheid transition.Footnote 108 By far the largest current open mapping collaboration is OpenStreetMap, a free editable map of the world. An ecosystem of tools has been developed around OpenStreetMap data, including some specifically designed for supporting humanitarian responses to crises.Footnote 109 Other projects are using mapping to capture local knowledge in order to assert claims of land ownership. Inspired by their work with indigenous communities in the Amazon, the organization Digital Democracy developed a method for contributing data to OpenStreetMap without having continuous access to the Internet.Footnote 110 The organization Hidden Pockets found that sexual and reproductive health services in Delhi were absent from Google’s map, so it set out to track this data and create its own publicly available map.Footnote 111

X Mobilization and Outreach

Data visualization can be a powerful vehicle for collaboration and mobilization in human rights outreach and advocacy. It can be used to interface with activist members, donors, and allies. For instance, visualization can illustrate the impact of an organization’s findings or recommendations and present a compelling vision of what is possible. Using data visualization not only to describe systemic abuses, but also to render a concrete alternative vision of the future and a message of hope, can be a powerful way to mobilize supporters. Visually mapping the activities of supporters also gives participants an overview of collective action, making feedback visible and creating a virtuous cycle of engagement, transparency, and solidarity that builds momentum.

Data visualization for advocacy can also be participatory in public space, inserted into the world. For example, in February 2009, a series of large-scale projections were displayed at sites across the center of Bristol, England. Using a powerful video projector, organizers displayed on building facades the line of the future water level anticipated due to climate change.Footnote 112 On a smaller scale, in 1991, the artist Félix González-Torres created an emotionally powerful series of portraits of friends with HIV/AIDS using piles of candy to match their body weight. Viewers were encouraged to take a piece of candy as they passed each portrait, thereby reducing the weight of the pile, performing, in essence, the wasting and loss of weight caused by the illness before each person’s death, and quietly implicating themselves.Footnote 113

XI Technical Sustainability

Data visualizations are often presented in paper copies of reports and briefs, but they also figure prominently in web-based communications by human rights organizations, thus making an understanding of that technology essential to maintaining best practices with respect to data and data visualizations. A growing number of human rights visualizations have also moved beyond the presentation of a single view of a given dataset, and instead allow users to explore data in online applications. However, online databases sometimes require ongoing maintenance, particularly when they rely on external services such as maps or timelines hosted by third parties. Amnesty International launched a series of interactive data sites in 2007 with “Eyes on Darfur,” followed by “Eyes on Pakistan” in 2010, “Eyes on Nigeria” in 2011, and “Eyes on Syria” in 2012, to map human rights abuses in those regions. Whereas the 2007 “Eyes on Darfur” project hosted satellite imagery on Amnesty’s own web server, the Syria and Nigeria sites used Google Maps to display points of interest. Google Maps provides a low-cost, easy-to-use, web-based interface to map information, which is attractive for human rights organizations with budgetary constraints. However, as Amnesty International’s experience illustrates, reliance on a third-party service provider comes with a long-term cost: By 2016, Google had updated its interface and both “Eyes on Syria” and “Eyes on Nigeria” no longer functioned. Amnesty would be required to update the back-end code for these sites in order to continue to plot information on Google Maps. Though human rights and humanitarian crises continue in these countries, the “interactive evidence” in Amnesty International’s visualization has become inaccessible.

Periodic updating of interactive sites is, in fact, essential to maintaining their accessibility and relevance. While “Eyes on Darfur” continues to function, its use of FlashFootnote 114 to display information makes it inaccessible on tablet and mobile devices, which have become popular since the site’s development in 2007 and do not support Flash. The Pakistan site featured “a geocoded database of more than 2,300 publicly reported incidents occurring between 2005 and 2009.”Footnote 115 A collection of rights-related incidents of that scale represents a treasure trove of possible human rights cases and a powerful baseline by which to judge reports of ongoing abuse. As of 2016, however, eyesonpakistan.org is no longer online and its database is no longer accessible. The apparent failure to prioritize or plan for the demands of changing technology and the lack of ongoing technical support mean it is no longer possible for human rights researchers and advocates to use the Syria, Nigeria, or Pakistan data.

One way to mitigate the obsolescence that plagued the Amnesty sites is to make both the data and the source code of a visualization readily available for download by users. This would allow visitors to access the raw data regardless of the technical implementation of the interactive interface. Though still rare among human rights NGOs, this is a growing practice among news organizations. The evolving field of data journalism offers one view of things to come. “Computer-assisted reporting” and “data-driven journalism” use spreadsheets and databases to help find patterns in data by using statistical methods and other techniques from the social sciences. The findings of the investigations are often presented through interactive news applications and data visualizations that can engage readers with data-driven reporting in a richer way, beyond the constraints of print. News organizations are increasingly posting these projects, their data, and tools to code-sharing websites like GitHub for others to download, use, and modify.Footnote 116 Like human rights organizations, journalists face limited resources and technical overhead, but news media are, thus far, more readily embracing the use of data analysis to drive investigations and visualization for effective storytelling. Any effort to share the data behind human rights reporting or visualization will need to carefully grapple with crucial ethical and security challenges, including confidentiality and anonymization, consent, and potential misuse of the data by abusive governments or other opponents.

XII Conclusion

Human rights researchers and advocates are adding new methodologies to their toolbox, drawing on emerging technologies as well as established data analysis techniques to enhance and expand their work. Data visualization holds exciting potential, bringing new techniques of knowledge production, analysis, and communication to human rights research and advocacy. Organizations are increasingly recognizing the power of data visualization to support human rights analysis and arguments to help make a memorable and persuasive case for change.

Enabled by digital technology and engagement with data, effective visualization is a powerful tool for understanding social problems and their potential solutions. While journalism and academic disciplines, including the social sciences, are using data visualization for both analysis and communication, the human rights field is just beginning to tap its potential. As interest grows, human rights organizations will need to grapple with the ethical and practical considerations of producing data visualizations.Footnote 117 More research is still needed on the effective use of data visualization in human rights work. Used in a principled way, however, data visualization can benefit human rights researchers and advocates, and those whose rights are in danger. It can help researchers identify patterns and trends; clarify a call to action; make data analysis compelling, understandable, and interactive; rally supporters; and perhaps even visualize the effects of activism itself.

9 Risk and the Pluralism of Digital Human Rights Fact-Finding and Advocacy

Ella McPherson
I IntroductionFootnote 1

The rise of information and communication technologies (ICTs) has captivated many human rights practitioners and scholars. Particular interest, mine included, is focused on the potential of using ICTs to support the pluralism of human rights fact-finding and advocacy.Footnote 2 In theory, now anyone with a cell phone and Internet access can document and disseminate evidence of human rights abuses. But what happens when this theory is put into practice?Footnote 3 What happens when ICTs are adopted in empirical realities shaped by unique contexts, distributions of resources, and power relations?Footnote 4 I will argue that, while the rise of ICTs has certainly created new opportunities, it has also created new risk – or negative outcomes – for human rights practitioners. This risk is silencing, and unequally so.

In this chapter, I focus on human rights fact-finding and advocacy from the perspective of practitioners at human rights NGOs, while acknowledging that the range of practices and actors involved in human rights work is much broader.Footnote 5 These practices form a communication chain: information moves from witnesses on the ground to human rights practitioners during fact-finding, who gather and evaluate this information for evidence of violations. This evidence is then packaged and communicated to audiences such as journalists, policy-makers, and publics as advocacy work designed to impel change through persuasion.Footnote 6 At each stage, we can think of successful communication as speaking to and being heard and understood by intended audiences.Footnote 7 In other words, it is about audibility (or its equivalent, visibility). Unsuccessful communication, in contrast, involves either audibility to unintended audiences or inaudibility to intended audiences. Successful communication can also have unsuccessful outcomes for the actors involved, as when a message is audible to intended audiences but is misunderstood or turns out to be erroneous, or when a message is received and interpreted as the communicator intended but turns out to be deceptive.

The success of communication matters, of course, for human rights practitioners’ ability to generate accountability for individual cases of human rights violations. It also matters for a value at the core of human rights: pluralism, or the successful communication of a variety of voices. Three types of pluralism are of concern in this chapter. The first is the pluralism of human rights actors vis-à-vis the state or non-state actors they wish to hold to account. The second is the pluralism of individual human rights actors within the human rights world, which, as with all worlds, has hierarchies corresponding to the distribution of power.Footnote 8 The third is the pluralism of access by the subjects and witnesses of violations to the mechanisms of human rights accountability, which, of course, cannot act on a violation without first hearing about it.Footnote 9

The chapter begins by outlining how risk is entwined with communication in the digital age. Rather than considering risk in isolation, we can think of it as manifesting via “risk assemblages,” or dynamic combinations of actors, technologies, contexts, resources, and risk perceptions.Footnote 10 In the subsequent two sections, I detail selected types of risk for human rights communication resulting from new combinations of actors and technologies involved in digital fact-finding and advocacy. For fact-finding, these include the risk of surveillance, which has consequences for participants’ physical security, and the risk of deception, which has consequences for their reputational integrity. For advocacy, these include the risk of mistakes, which can in turn risk reputational integrity, and the risk of miscalculations, which can jeopardize precious resources. In the following section, I explain how this materialized risk combines with risk perceptions to create a silencing double bind. Human rights practitioners may be silenced if they don’t know about risk – and they may silence themselves if they do. This silencing effect is not universal, however, but disproportionately affects human rights practitioners situated in more precarious contexts and with less access to resources.Footnote 11 This has consequences for the three types of pluralism outlined above. The chapter finishes by outlining four ways of loosening the risk double bind: educational, technological, reflexive, and discursive approaches to working with risk.

II Communication, Mediation, and Risk in the Digital Age

As communicators, we all do a number of things to increase the odds that our communications are successful. We establish the identities of our communication partners through clues we gather from their appearance and bearing. We supplement our messages with cues such as facial expressions or emoticons to guide interpretation, and we look for cues from our audiences that they have heard and understood us.Footnote 12 We gather information about our interlocutors’ context – the time and place in which they are communicating – and supplement our messages with information about our own contexts (often referred to as metadata). We adjust our production and reception of content to these clues, cues, and contextual information. Still, even with all of these aids, communication can be unsuccessful, and this risk is exacerbated by the mediation of communication over ICTs.

Mediation is the extension of communication across time and/or space using technology. The closer we are in time and space to our communication partners, the easier it tends to be for us to establish their identities, observe and provide cues, and understand context. Easiest of all is face-to-face communication. By introducing “temporal and spatial distances,” mediation makes all of this more difficult, as we are no longer in the same environment.Footnote 13 It is not, however, just this distance that increases the risk of unsuccessful communication, but also the introduction of intermediaries. These intermediaries are not neutral, but rather introduce new technical features and new actors with new motives, as well as new ways for existing actors to interfere with communications.

The technical features of ICTs can diminish, augment, or alter the clues, cues, and contextual metadata associated with a communication. Furthermore, the complexity of these technical features may make it difficult to understand just what has happened. For example, many social media user profiles allow people to communicate with pseudonyms or assumed identities. Twitter’s character limit squeezes nuance out of tweets, though users have introduced the use of hashtags as an abbreviated interpretation cue. In another example, YouTube automatically dates videos according to the day it is in California at the time of upload, no matter where the upload took place. This metadata is widely misunderstood and has contributed to disputes about the veracity of videos.Footnote 14
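The date shift is easy to demonstrate. In the sketch below, a hypothetical video recorded in the early morning in Damascus carries the previous day’s date once stamped in California time:

```python
# Illustrating the timezone-stamping quirk: an early-morning event in
# Damascus falls on the previous calendar day in California. Times are
# invented for illustration. Requires Python 3.9+ for zoneinfo.
from datetime import datetime
from zoneinfo import ZoneInfo

uploaded = datetime(2013, 8, 21, 4, 30, tzinfo=ZoneInfo("Asia/Damascus"))
stamped = uploaded.astimezone(ZoneInfo("America/Los_Angeles"))

print(uploaded.date())  # 2013-08-21 (local date of the event)
print(stamped.date())   # 2013-08-20 (date a California-based stamp shows)
```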

A significant proportion of new actors behind ICTs are commercial, governed by profit motives. These motives can shape technical features, like Facebook’s “like” and “share” buttons, which are designed to keep eyeballs on timelines peppered with advertisements. The motives of commercial communication platforms may not necessarily align with the motives of communicators. As discussed further below, this is particularly evident in the algorithms that determine visibility on social media and thus who is seen by whom. These algorithms may make certain communications either more or less visible than their producers intended. The phenomenon of commercial intermediaries controlling public visibility is nothing new – think of the gatekeeping role of mainstream news organizations. What is new is the lack of transparency and accountability when visibility decisions are made by a black-box algorithm instead of a human journalist.Footnote 15 Just as the technical complexity of ICTs obscures these algorithms and the commercial motives underpinning them, it also hides third-party actors. These include political actors who have a vested interest in human rights communication. The market for digital surveillance is thriving, and hardware and software that allow us to communicate over time and space also create opportunities for eavesdropping. In sum, at the same time as communicators using ICTs usually can glean less about their interlocutors and eavesdroppers than in a face-to-face situation, they must also know more about intermediary technologies that are both complex and opaque.Footnote 16 Mediation thereby increases the risk of unsuccessful communication and its attendant consequences.

Alongside many other professional worlds of communication, human rights practitioners are considering and adopting new ICTs. This use of ICTs and the mediation they engender supplements other forms of communication, creating a new “interaction mix” characterized by renewal, as fresh technologies proliferate and slightly stale ones become obsolete.Footnote 17 In terms of human rights fact-finding, this new mix has been described as enabling a new generation of methodologies.Footnote 18 Traditionally, the gold standard of fact-finding has been the face-to-face interview between civilian witnesses and human rights practitioners. Often facilitated by trusted networks cultivated over time, the interview allows for the co-production of information between the witness and the practitioner. This witness testimony and the accompanying analysis done by human rights practitioners are the cornerstones of the weighty, precisely worded, and highly documented orthodox human rights report.Footnote 19 These reports, in turn, underpin human rights advocacy, which practitioners traditionally – though not exclusively – communicated to targets via the mainstream media.Footnote 20

Human rights communication has therefore always been mediated, whether information is passed through a trusted network of witnesses or shaped to attract the attention of journalists covering human rights violations.Footnote 21 It has also always entailed risk. One only has to dip into the multitude of reports on the conditions of human rights practice to see this – or to consider practitioners’ risk-mitigation tactics, ranging from security training to robust and transparent methodologies to publicity strategies.Footnote 22 But the new mix of human rights fact-finding and advocacy in the digital age has brought about new risk assemblages shaped by technologies, actors, contexts, resources, and risk perceptions.Footnote 23 Over the next three sections of this chapter, I outline elements of these new risk assemblages and explain how they can hinder successful communication, with implications for the pluralism of human rights communication.

III Digital Fact-finding and Communication Risk

Human rights practitioners have adopted ICTs for fact-finding in a variety of ways, including using high-technology information sources like satellite images, drone videos, big data, and statistics as well as open source social media content.Footnote 24 Given our concern with communication, here I focus on practitioners’ use of digital information that documents human rights violations and has been produced and transmitted by civilian witnesses – “civilian” in contrast with professional to highlight their inexpert status, and “witness” as someone who is purposively communicating experienced or observed suffering.Footnote 25 Civilian witnesses can be spontaneous or solicited.Footnote 26 In the digital age, spontaneous witnesses might use their smartphones to document violations that they then share with broader audiences via social media or messaging apps; sometimes this information is gathered, curated, and connected to human rights NGOs by networks of activists. Solicited witnesses may be answering a human rights NGO’s open call for information made via a digital crowdsourcing project or a digital reporting application.

Digital information from civilian witnesses affords human rights practitioners a number of fact-finding advantages. First, the images and video civilian witnesses produce can provide much more detailed evidence than witness interviews that rely on memory.Footnote 27 Second, consulting civilian witnesses can tap wells of knowledge, particularly expertise relating to local contexts unfamiliar to foreign practitioners. Third, a wider incorporation of civilians via ICTs can fire up public enthusiasm about human rights and thus receptivity to advocacy.Footnote 28 Fourth, and most important for our concern with pluralism, these new sources can support the variety and volume of voices speaking and being heard on human rights. They supplement interviewing’s traditional co-production of information between witnesses and practitioners with both the more autonomous production of spontaneous digital witnesses and new forms of co-production via solicited digital witnesses.Footnote 29 If these witnesses are situated in closed-country contexts or rapidly unfolding events, they might otherwise be inaccessible to human rights practitioners.Footnote 30 Indeed, fact-finding in a number of recent cases has hinged on evidence documented digitally by civilian witnesses. For example, Amnesty International’s research into a 2017 shooting at an Australian refugee detention center in Papua New Guinea used refugees’ photos and videos to challenge both governments’ official version of events, which was that Papua New Guinea Defence Force soldiers fired bullets into the air rather than into the center.Footnote 31

These opportunities are all made possible by ICTs’ mediation of communication over time and place. Of course, this mediation, and the intermediaries it requires, also introduces risk. Below, I outline two possible manifestations of communication risk and their consequences arising from the introduction of new technologies into fact-finding, associated new commercial actors, and new opportunities for existing actors to interfere with communications. The first is the risk of surveillance, in which the communication is audible to unintended recipients and generates concomitant risk for the physical security of civilian witnesses and human rights practitioners. The second is the risk of deception, in which the producer of a digital communication engineers the recipient’s misinterpretation of that communication. Misinterpretation creates follow-on risk to the reputational integrity of human rights practitioners and their NGOs. These are familiar categories of risk in the human rights domain but manifested, as explained below, in new ways. Both are made possible by the technical complexity of mediating ICTs, which allows eavesdroppers to hide and deceivers to manipulate metadata.

A Surveillance and Physical Security

Surveillance, understood broadly as monitoring information about others for purposes including management and control, is a risk that civilian witnesses and human rights practitioners have always faced.Footnote 32 Surveillance of their identities, networks, and activities is a key tactic deployed by state adversaries in a “cat-and-mouse” game over truth-claims.Footnote 33 Human rights practitioners who pioneered the use of ICTs may have had a momentary advantage in this battle by using these technologies to transmit information quickly and widely. Many state actors, however, have caught up quickly and even surpassed human rights actors in their strategic use of ICTs. The surveillance opportunities ICTs afford center on a metadata paradox. ICTs can both reveal and conceal communication metadata; the first facilitates mass surveillance, while the second facilitates spyware.

ICTs are built to collect metadata on their users, often without users understanding just how significant their data trails are. Many ICT companies routinely collect users’ metadata for reasons ranging from marketing to legal compliance.Footnote 34 This profit-driven surveillance produces information about communications that also meets the surveillance imperatives of states. The US National Security Agency, for example, infamously has a bulk surveillance program that collects telecommunications metadata. Activists worry that this program has set a standard for other governments in terms of the permissible level of spying on their citizenries – as exemplified by the Egyptian government’s 2014 request for tenders for a mass social media surveillance system.Footnote 35 In addition to its implications for the rights to privacy and freedom of opinion and expression, this form of surveillance is a particular concern for individuals communicating information critical of retaliatory states.Footnote 36 Even if the content of these communications remains private, metadata can reveal connections between civilian witnesses and human rights practitioners and, through social network analysis, identify individuals as human rights practitioners.Footnote 37
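A toy example makes the point about metadata’s revealing power: given only a log of who contacted whom, elementary network analysis flags the hub of the network. The sketch below uses the networkx library and an invented call log.

```python
# Even without message content, a log of who contacted whom exposes
# network hubs. The call records are invented; networkx is assumed.
import networkx as nx

call_log = [  # (caller, callee) pairs from hypothetical metadata
    ("witness_1", "practitioner"),
    ("witness_2", "practitioner"),
    ("witness_3", "practitioner"),
    ("witness_2", "witness_3"),
]

G = nx.Graph()
G.add_edges_from(call_log)

# Degree centrality alone flags the practitioner as the network's hub,
# and every node linked to it as a likely source.
for node, centrality in nx.degree_centrality(G).items():
    print(node, round(centrality, 2))
```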

While mass surveillance depends on ICTs’ revelation of communication metadata, spyware depends on its obfuscation, afforded by ICTs’ complexity. Spyware hides in victims’ communications equipment to track and share information about their activities.Footnote 38 In order to get spyware into target devices in the first place, victims must be deceived into installing it. This often happens through a wolf-in-sheep’s-clothing tactic called social engineering, where messages containing spyware are disguised through the manipulation of metadata and content. For example, a human rights practitioner in the United Arab Emirates received unsolicited text messages containing a link that appeared to document evidence of prison torture. Had he clicked on the link, this practitioner’s iPhone would have been infected with commercial spyware priced at around $1 million – an indication that a powerful actor was behind the attack.Footnote 39 In another case, a Mexican human rights practitioner received a text message purporting to share news about the investigation into the 2014 disappearance of forty-three students. He fell for it, clicking on the link and infecting his phone with malware believed to have been sold to the Mexican government by an Israeli cyberwarfare company.Footnote 40
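The mechanics of such disguised links can be made concrete with a short sketch. This is not the spyware used in either case above; it simply illustrates, with an assumed message, one basic verification heuristic: comparing a link’s displayed text with its actual target.

```python
# Minimal phishing-heuristic sketch; the message below is hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    """Collects an anchor's target URL and its visible link text."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.href = None
        self.text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link:
            self.text += data

# The visible text mimics a news URL; the real target is elsewhere.
message = ('<a href="http://malicious.example/payload">'
           'https://news-site.example/prison-torture-report</a>')
checker = LinkChecker()
checker.feed(message)

shown = urlparse(checker.text.strip()).netloc
actual = urlparse(checker.href).netloc
if shown and shown != actual:
    print(f"Warning: link displays {shown} but points to {actual}")
```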

Digital security risk that materializes as physical security risk is, unfortunately, increasingly common for human rights practitioners and civilian witnesses.Footnote 41 If surveillance makes fact-finding communication audible to an unintended audience, its participants may not be aware this has happened until they experience related harassment and attacks. Security risk may spread through practitioners’ and witnesses’ networks, which are rendered visible by smartphone contacts and social media friends and followers lists. Furthermore, the mediation of digital fact-finding over time and space can make it difficult for practitioners who have learned of a threat to locate and warn civilian witnesses.Footnote 42 Human rights practitioners can and do use security tools – such as technologies supporting encryption, anonymity, and the detection of spyware – to counteract the corporate/state surveillance nexus. These technologies are threatened, however, by laws curtailing their use.Footnote 43 Furthermore, powerful discourses, such as “nothing to hide, nothing to fear,” which have been propagated by state actors and picked up by the media, align the use of these technologies with criminality and threats to national security.Footnote 44

B Deception and Reputational Integrity

Human rights practitioners’ use of digital information from civilian witnesses generates another category of risk: susceptibility to misinterpretation through deception. By dint of their accusations of violations, human rights practitioners often engage in battles over truth-claims with their adversaries. Though the manipulation of truth-claims with an intent to deceive has always been a feature of these battles, human rights practitioners may be more exposed to such deception in the digital age for several reasons. First, ICTs afford a greater number and variety of sources of information, many of whom are outside of the trusted networks that human rights organizations traditionally consult. Deceptive actors can camouflage themselves among this broader pool of sources. Second, unlike in a traditional face-to-face interview, human rights practitioners using spontaneous or solicited digital information from civilian witnesses are not present at the moment of production. As such, they cannot rely on their direct perceptions of identity clues, communication cues, and contexts to verify civilian witnesses’ accounts.Footnote 45 Instead, they must use digitally mediated content and metadata as a starting point, which can be distorted and manipulated. Third, this information is often in image or video format that appears to be amateur. This lends it an aura of authenticity – rooted, perhaps, in a “seeing is believing” epistemology – that may belie manipulation.Footnote 46

Deception through truth-claims manipulation can be divided into at least three categories: outright staging of content, doctoring of content, and doctoring of metadata.Footnote 47 Staging of content involves packaging fakery as fact, as with the viral YouTube video “SYRIA! SYRIAN HERO BOY rescue girl in shootout.” This video, which claimed to document children dodging bullets while running through a dusty Syrian street, was actually a cinematographic project by a Norwegian director that was filmed in Malta.Footnote 48 Doctored content, in turn, uses real rather than staged content but relies on digital editing tools such as Photoshop to alter the images. For example, one human rights practitioner received images via WhatsApp from a source who claimed that they were evidence of torture during detention. These included a picture of a person who seemed, at first glance, to have a bruised face. Additional investigation, however, revealed that this was a highly edited version of an older picture involving changes to its color balance to create the illusion of bruises.Footnote 49
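One heuristic investigators sometimes apply to suspected edits of this kind is error level analysis (ELA). The sketch below, written with the Pillow imaging library and a hypothetical file name, shows the basic idea; it is an illustration under stated assumptions, not the method used in the case above, and ELA output alone is never conclusive.

```python
# Minimal error-level-analysis (ELA) sketch. The input file name is
# hypothetical, and bright ELA regions only warrant closer inspection;
# they do not by themselves prove doctoring.
import io
from PIL import Image, ImageChops

original = Image.open("claimed_torture_photo.jpg").convert("RGB")

# Re-save at a fixed JPEG quality and diff against the original;
# regions edited after the last save tend to recompress differently.
buffer = io.BytesIO()
original.save(buffer, "JPEG", quality=90)
buffer.seek(0)
resaved = Image.open(buffer).convert("RGB")

ela = ImageChops.difference(original, resaved)
print("Per-channel error extrema:", ela.getextrema())
ela.save("ela_map.png")  # inspect visually for localized anomalies
```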

Human rights practitioners report that it is the last of these three forms of deception – the doctoring of metadata – that is by far the most prevalent.Footnote 50 This involves scraping videos or images from one context and repackaging them as evidence of violations in another context. Examples include reposting YouTube videos with new descriptions, as in the case of one video depicting the water cannoning of a man shackled to a tree while other men watch and laugh. This video appeared on YouTube multiple times with at least three different sets of metadata entered in the video description. One version claimed to depict Venezuelan armed forces assailing a student, another stated that it was Colombian special forces and a farmer, and a third portrayed the scene as Mexican police and a member of a civil defense group.Footnote 51
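A common countermeasure to such recycled footage is to compare a new upload against known material using perceptual hashing. The minimal sketch below assumes keyframes have already been extracted from both videos and uses the third-party Pillow and ImageHash packages; the file names and distance threshold are illustrative.

```python
# Sketch of recycled-footage detection via perceptual hashing.
# File names are hypothetical keyframes extracted beforehand.
from PIL import Image
import imagehash

hash_new_upload = imagehash.phash(Image.open("new_claim_keyframe.png"))
hash_known_clip = imagehash.phash(Image.open("earlier_upload_keyframe.png"))

# Perceptual hashes tolerate re-encoding and resizing; a small Hamming
# distance suggests the "new" evidence is an old clip with new metadata.
distance = hash_new_upload - hash_known_clip
if distance <= 8:  # illustrative threshold
    print(f"Likely recycled content (hash distance {distance})")
```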

Though some instances of deception may be malevolent, other instances may be backed by the best of intentions. For example, civilian witnesses may use images from one event to illustrate another, similar event that was not recorded. Nevertheless, using any kind of manipulated information as evidence creates a follow-on risk to the reputations of human rights practitioners and their organizations. For both, credibility is a fundamental asset, not only for the persuasiveness of their advocacy, but also for garnering donations and volunteers, influencing policy-making, and motivating mobilization.Footnote 52 Credibility is also a human rights organization’s Achilles’ heel, as it can be damaged in an instant with the publication of truth-claims that others convincingly expose as false.Footnote 53 Though the verification of information has always been a cornerstone of human rights work as a truth-claim profession, information mediated by ICTs is challenging established verification practices. This is not only because of the new sources and formats of information ICTs enable, but also because verifying digital information requires expertise that, though increasingly standardized, is still emergent.Footnote 54

IV Digital Advocacy and Communication Risk

As with fact-finding, human rights practitioners are incorporating ICTs into their advocacy strategies, venturing far beyond websites into formats including apps, livestreaming, and virtual reality. Because human rights practitioners are paying particular attention to mainstream social media platforms to supplement their traditional advocacy practices, I focus on that medium here.Footnote 55 Practitioners are communicating advocacy messages via social media to directly target policy-makers, either publicly or via private messages, and to attract the attention of the mainstream media.Footnote 56 They are also using social media to mobilize publics for a variety of reasons, including fundraising, creating visibility for an issue, and building networks between publics and subjects of violations in a show of global solidarity.Footnote 57 Many NGOs undertake advocacy over social media through institutional accounts operated by individuals. Though dedicated communications professionals operate these accounts at some organizations, at others the arrangement is more ad hoc, undertaken by existing staff according to interest or availability.

The use of social media affords human rights practitioners a number of advocacy advantages. It can allow them to amplify messages and reach advocacy targets without depending on the mainstream media, whose human rights coverage may be circumscribed by commercial imperatives, censorship, and norms of newsworthiness.Footnote 58 The range of communication formats supported by social media enables development of new and captivating ways to represent human rights information, such as data visualization.Footnote 59 Additionally, the quantification metrics built into social media platforms, such as numbers of likes and shares, allow human rights practitioners to track engagement with their messages.Footnote 60 They can then incorporate these numbers into their campaigns targeted at policy-makers to quantify public support for their advocacy aims.Footnote 61 Human rights advocacy over social media takes a wide variety of forms; one example is the 2013 Thunderclap campaign created by EDUCA, an NGO based in Oaxaca, Mexico, to raise awareness about ongoing human rights violations there. Thunderclap is a digital platform that allows users to coordinate their supporters’ automatic participation in a onetime, synchronized mass social media posting of a particular message.Footnote 62 EDUCA surpassed its goal of 100 supporters, and its Thunderclap – timed to coincide with the October 23 UN Universal Periodic Review of Mexico’s human rights record – reached more than 58,000 people via social media.Footnote 63

Again, however, the advantages of social media for advocacy are accompanied by risk, and here I detail two types of communication risk and their consequences. Both stem from the introduction of new technologies into advocacy, which in turn introduces new actors with new motives. Human rights practitioners are accustomed to considering the motives of intermediaries and their intended audiences in shaping their advocacy strategies. For example, they cater to the “media logic” of mainstream media outlets, tailoring the tone and theme of their content as well as building their identities as credible sources to meet journalists’ exigencies.Footnote 64 The use of social media intermediaries, however, requires them to shape advocacy messages in light of new “social media logics.”Footnote 65 These are also commercially driven motives, manifested in new technical features. Like journalists and journalism, these technical features can be inscrutable to human rights practitioners and incompatible with human rights advocacy – but in different ways.Footnote 66 Conducting advocacy via these intermediaries thus introduces new facets to existing risk. The first risk addressed below is audibility to unintended audiences – what I refer to as mistakes – which can have reputational consequences. The second is inaudibility to intended audiences – advocacy miscalculations – which can waste resources.

A Mistakes and Reputational Integrity

An advocacy-related mistake involves something happening that the communication’s producer does not wish to happen. Social media’s facilitation of mediation to publics, in combination with technical features that both speed up and obscure the dynamics of this mediation, introduces new ways of making mistakes. Analog means of communicating with publics had areas of friction, such as the effort required to set up a press conference.Footnote 67 This friction, no doubt, was frustrating during crises in need of immediate response, but it also allowed room for reflexivity and proofing. Digital communication to publics, in contrast, requires only the click of a button. As such, the pace of communication is much faster on social media, as is the pressure to produce at speed. Proofing becomes the friction, and there may not always be time for this to be done as thoroughly as one would like. Furthermore, the technical complexity of social media can make proofing difficult to do. This is particularly the case with respect to ensuring that the right communication is audible to the right audience, as audiences are both blurred and obscured by social media. Mistakes can thus be about erroneous content, but they can also involve the transmission of private information to publics or of information intended for one “imagined audience” or communication context to another.Footnote 68 The consequences of these mistakes are also caught up with mediation, as ICTs allow endless possibilities of repetition and amplification over time and space.Footnote 69

Here, I develop the example of individual practitioners managing multiple Twitter accounts, each with its own profile and audience. Rather than involving an error in the advocacy itself, the associated mistake results from having social media open as an advocacy channel. Twitter’s phone apps easily allow users to switch between accounts, requiring nothing more than holding down the profile icon in the iPhone version. Of course, this also means it is easy to slip between accounts erroneously or forgetfully. When one account is personal and one is institutional, this can create some sticky situations.

One such situation arose in response to a 2014 tweet by Amnesty International about the police shooting in Ferguson, Missouri: “US can’t tell other countries to improve their records on policing and peaceful assembly if it won’t clean up its own human rights record.” Six minutes later, the Center for Strategic and International Studies (CSIS), a major public policy think tank, replied, “Your work has saved far fewer lives than American interventions. So, suck it.” CSIS scrambled to explain the tweet as the work of an intern who had access to the CSIS Twitter account but believed he was logged into his personal account when he wrote the message. In the context of a flurry of media stories, CSIS’s senior vice president of external relations described himself and his colleagues as “distressed,” and CSIS quickly sent out an apology tweet to Amnesty. Amnesty followed this by tweeting: “.@CSIS and @amnesty have kissed and made up. Now back to defending human rights!”Footnote 70

Though this example is relatively lighthearted, more serious mistakes can have more serious consequences. One human rights practitioner told me about a mistake made on his organization’s Facebook feed when an image of a private meeting was erroneously published. A furious phone call from an important participating organization ensued, creating what the practitioner described as “a terror effect within the organization” about using social media. At the time of our interview, the resulting policy at this NGO was that every social media post made on the institutional account must first be approved by the executive director.

Serious mistakes can jeopardize an organization’s reputational integrity, particularly with respect to credibility and professionalism. The relative permanence of information published on social media, as well as the unpredictability of its circulation, means a mistake cannot be undone but must instead be overcome. Repairing a damaged reputation, which may involve performing credibility over time and rebuilding social capital, can divert precious resources from human rights NGOs’ core aims.Footnote 71 Even if the mistake is quickly forgiven, it can – as Amnesty’s last tweet above highlights – detract from the message and work of the organization. Because of the risk of mistakes that accompanies the use of social media, adopting this technology can result in slower and more resource-intensive practices than expectations might suggest.

B Miscalculation and Resources

A communication miscalculation means that one’s message is inaudible to one’s intended audience. Of course, the risk always exists that one’s audience either does not hear or does not listen to the message. This is exacerbated by mediation, not only because distance makes it more difficult to perceive audience cues about attention, but also because the intermediary may act on the message in ways that make it less audible. In the case of social media, this includes evaluating messages automatically with timeline algorithms to determine how visible they should be, and to whom.

Human rights practitioners are in good company with respect to not knowing exactly how these algorithms make decisions about message visibility. The algorithms that govern social media timeline visibility are considered proprietary trade secrets, and these algorithms in turn may be governed by deep learning, in which the algorithm adapts autonomously based on the information to which it is applied.Footnote 72 Furthermore, these algorithms may have thousands of moving parts that are updated weekly or even daily.Footnote 73 Deciphering these algorithms – which are black boxes to just about everybody, even possibly to those who design them – is a far cry from building a trusting relationship with a journalist.Footnote 74

Practitioners do know that these algorithms prevent organizations from reaching all of their fans or followers with their posts. “Organic,” or unpaid, reach may be only 10 percent of a potential audience, and only a small proportion of those reached will engage with the post by liking, sharing, or clicking on a link.Footnote 75 Facebook does shed some light on how this organic reach is determined, stating in its support materials for nonprofits that the post’s timing and its relevance to particular audience members matter.Footnote 76 Twitter reveals that it ranks a tweet for relevance on a number of criteria, including how much user interaction it has already generated and how much past interaction exists between the producer and the potential recipient; in other words, visibility returns to the already visible and to the already networked.Footnote 77 Still, ambiguity remains for the organic visibility of individual posts. Greater certainty is available, however – at a price: mainstream social media platforms allow users to buy access to larger and targeted audiences. Social media advocacy is therefore a “free-to-play, pay-to-win game.”Footnote 78
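The arithmetic of organic reach is sobering. A back-of-the-envelope sketch, taking the roughly 10 percent organic reach figure cited above and assuming, purely for illustration, 50,000 followers and a 2 percent engagement rate, shows how few people an unpaid post may actually touch.

```python
# Back-of-the-envelope reach estimate. The 10 percent organic reach
# figure is cited in the text; the follower count and engagement rate
# are assumptions for illustration.
followers = 50_000
organic_reach_rate = 0.10  # share of followers shown an unpaid post
engagement_rate = 0.02     # assumed share of viewers who like/share/click

reached = int(followers * organic_reach_rate)
engaged = int(reached * engagement_rate)
print(f"Of {followers:,} followers, ~{reached:,} see the post "
      f"and ~{engaged:,} engage with it.")
```

On this sketch’s figures, a post to 50,000 followers might earn only around one hundred engagements, which helps explain why paid visibility so often becomes the decisive variable.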

Human rights practitioners encounter further elements of social media logic that generate communication risk. One is social media platforms’ community standards, which outline the grounds for removal of content that might alienate users. Graphic images and videos fall into this category. The problem for human rights advocacy as well as fact-finding is that the documentation of certain categories of violations necessarily involves depictions of violence – though practitioners think through the ethics of such representations very carefully.Footnote 79 Like the determination of timeline visibility, content moderation is an opaque decision-making process.Footnote 80 Practitioners know that whether or not a graphic video or image stays on social media can depend on a number of factors, including how it is explained by whoever posts it (Facebook allows graphic images and videos to stay up if they are “in the public interest,” but not if they are “for sadistic pleasure”), if it is reported by another user, what the content moderator employed by the platform to review content decides, and – as recently happened with the video livestreamed by Diamond Reynolds immediately after police shot her boyfriend – even “technical glitches.”Footnote 81

A third way in which social media logics can introduce advocacy miscalculations is the content culture they cultivate by rewarding certain types of content with visibility – a culture that contrasts sharply with the traditional registers of human rights advocacy.Footnote 82 Facebook, for example, counsels nonprofits that “formal language can feel out of place” and that “placing blame … typically doesn’t lead to high engagement.”Footnote 83 It may also be that certain types of human rights and certain types of victims are more aligned than others with the logics of social media virality, which is co-constructed by the predilections of algorithms and networked humans.Footnote 84 This was the topic of much public contemplation following the 2015 circulation of an image of three-year-old Syrian refugee Alan Kurdi’s body washed up on a Turkish beach. Many critically attributed this image’s viral spread to Alan’s resemblance to a Western child, and thus his relatability to Western social media users.Footnote 85 Furthermore, the competition for audience attention on social media has fueled the rise of “clickbait” headlines, which feature a “curiosity gap.” These headlines give away just enough to pique someone’s attention, but require that person to click on a link to get the full story.Footnote 86 An interviewee from a human rights NGO that works with migrants and refugees joked about why this popular format is not an option for her organization’s advocacy practices: “We are not going to be like, you know, ‘This man got to the border, and you would never believe what happened next!’ You can’t do that, because it makes you sound … your credibility is gone. So we don’t do that.” The content culture that is rewarded on social media, then, may also be at odds with what the target audiences of human rights advocacy want to hear from practitioners – if the audience even pays attention to social media advocacy in the first place.Footnote 87

Using social media allows human rights practitioners to directly address advocacy targets, but whether those targets hear or listen to those advocacy messages is often an open question. The risk of such advocacy miscalculation generates follow-on risks to an NGO’s resources. These include wasted time, since maintaining a social media presence – including designing content, building and interacting with networks, and developing advertising strategies – demands significant person-hours. This is also a waste of money, as is targeted advertising that falls on deaf ears. Social media’s relative novelty has meant a steep learning curve for human rights practitioners, and risk to advocacy communications can be diminished with expertise. At the same time, however, mastery remains somewhat of a mirage, due not only to the inaccessible element of social media logics, but also to the ICT sector’s state of permanent renewal. Users regularly encounter new platforms as well as new features within the platforms they use, which appear seemingly overnight as tweaks to commercially driven systems designed to hold our attention.

So far, I have outlined the communication risk posed by digital fact-finding and advocacy related to new technologies and new actors; in the next section, I put these findings into conversation with contexts, resources, and risk discourses to show how risk’s silencing effect is not universal, but rather can map onto existing inequalities.

V Risk Assemblages, Pluralism, and Inequality

Returning to the three types of pluralism introduced earlier in the chapter, it is clear that the manifested forms of risk outlined above have silencing effects on the first category – the pluralism of the human rights world vis-à-vis the world of power it aims to hold to account. New mediating technologies, with commercially driven technical features that complicate communication, fuel new communication cultures and allow new spaces for adversaries to intervene. Surveillance, through its consequences for physical security, can stop human rights practitioners from speaking. The susceptibility of practitioners to deception and mistakes, with the repercussions for reputations, may deafen advocacy targets to their communications. Advocacy miscalculations may prevent advocacy targets from hearing those communications at all. In order to understand the effects of communication risk on the other types of pluralism, however, we must further develop our understanding of these new risk assemblages. We must also think about context, resources, and risk discourses.

As materialized risks are always embodied, an individual practitioner’s context and resources matter in understanding how risk impacts the second type of pluralism, namely pluralism within the human rights world. Context – or individuals’ “social risk positions” ranging from their political environments to their positions within relevant social hierarchies – influences exposure to risk.Footnote 88 In turn, the resources individuals have at hand influence their ability to mitigate risk. Key here is the resource of expertise, such as digital literacy about computer security, knowledge of digital verification practices, and facility with social media. Also relevant are the resources that can be used to secure expertise, including money, of course, but also the social capital and reputations that can connect practitioners to expertise and convince experts to share it. The same resources can be used to secure physical and digital safeguards. The upshot is that risk curtails pluralism within the human rights world by silencing practitioners unequally.

Inequalities in contexts and resources intersect with the types of risk enumerated above in a variety of ways. The risk of surveillance depends greatly on the proclivities of a practitioner’s political opponents for purchasing surveillance technologies and enacting pro-surveillance legislation. It also depends on the practitioner’s networks, whose resistance to surveillance is only as strong as their weakest links; one member falling prey to malware can unwittingly expose all her communication partners.Footnote 89 Security literacy is crucial. As a practitioner at an organization that trains human rights reporters on digital security once told me, “A lot of them don’t know that Facebook is where a lot of people who would target human rights defenders go shopping.” Security literacy is expensive, in terms of money and time, and it is daunting; therefore, it is more accessible to some than to others.Footnote 90

Deception via the manipulation of truth-claims is also a risk that human rights practitioners experience differently. Like surveillance, this risk is conditional on the political context, since some governments are particularly inclined to engage in information wars. Associated reputational risks are not isolated, but rather may have repercussions for a practitioner’s networks. This is because human rights organizations build their credibility in part through networks of association with credible peers; one organization’s loss of credibility allows opponents to tar its network with the same brush.Footnote 91 Some organizations can weather a hit on their credibility better than others. As human rights organizations also build credibility through performance over time, a more well-established NGO would have more reputational capital to counterbalance an instance of susceptibility to deception or a mediation-related mistake.Footnote 92

The risk of advocacy mistakes and miscalculations can be mitigated by human rights organizations’ in-house social media expertise and consequently the money required to acquire this expertise. Funds also allow human rights organizations to buy visibility for their social media communications through targeted advertisements. Those with fewer resources to dedicate to social media advocacy are, unfortunately, more likely to waste resources by engaging in this practice. This is evident in the results of a recent study, which found that, of 257 sampled human rights NGOs, the richest 10 percent had 92 percent of the group’s total Twitter followers, 90 percent of their views on YouTube, and 81 percent of their likes on Facebook. The study also found that social media advocacy does not seem to help NGOs set the agenda in the mainstream media – further evidence that unsuccessful digital communication can curtail the greater pluralism that using ICTs could bring, both within the human rights world and vis-à-vis the world of power.Footnote 93

A major purpose of the first and second forms of pluralism is to support the third form: the pluralism of civilian access to human rights mechanisms. Civilians cannot access accountability without their voices – their accounts – being heard. If the NGOs representing them are silenced, they too may be silenced. So, communication risk restricts civilian access to the mechanisms of human rights unequally as well. As this effect maps onto context and resource distributions, this means that civilians in more precarious contexts with relatively few resources – in other words, those who might most need human rights mechanisms – are more likely to be silenced. The networked nature of human rights NGOs, which are characterized by solidarity, information exchange, and international communication, goes some way to counteract this effect, as another organization may be able to pick up the communication chain.Footnote 94 Still, while ICTs do create human rights communication channels where none existed before, we must be alert to the possibility that they do not level inequalities of audibility, but rather extend them.

So far, this chapter has looked at materialized risk, but risk perception is just as important for understanding the human rights practitioner’s lived experience of risk.Footnote 95 It is also just as important for understanding how the risk accompanying use can impact the pluralizing potential of ICTs. As evident from interviews with some human rights practitioners, in which they qualified their view of ICTs with words such as “terrified” and “scary,” knowing about risk can be distracting and even debilitating. The more complex the risk assemblage, the stronger this effect, as it is more difficult to understand and predict the risk. This knowing but not knowing exactly brings its own anxieties.Footnote 96

Risk perception is not necessarily accurate, in part because risks are hard to estimate and because the idea of them can be overwhelming. Furthermore, as explained below, the specter of risk associated with a practice may have been conjured on purpose to prevent people from undertaking that practice; it may be a discourse deployed in the pursuit of power.Footnote 97 A full exploration of risk perception, which is outside the confines of this chapter, would consider the practices individuals adopt in anticipation of these risks and would investigate how these practices affect pluralism. For example, some human rights practitioners are renouncing digital communication methods for a return to analog.Footnote 98 Some are slow to adopt digital information from civilian witnesses for fact-finding.Footnote 99 As mentioned above, practitioners introduce protracted review systems for social media communications and pay for the visibility of their social media messages, and thus the success of their digital communications depends on the resources of time and money. Risk perception can also silence, and unevenly so. Furthermore, as practitioners weigh up pluralism versus security in deciding whether or not to communicate digitally, erroneous risk perception can swing the balance too far toward security.

What we have here, then, is a risk double bind – risk is bad for pluralism if you know about it, and it is bad for pluralism if you don’t. If the latter, human rights practitioners are more likely to fall prey to communication risk. If the former, risk perception can prevent them from communicating digitally in the first place. This creates its own follow-on risk, such as missing vital pieces of evidence or being dismissed as Luddites in the context of a broader pro-technology zeitgeist that has enthused donors. Though this double bind can make practitioners feel caught between paralysis and propulsion, it is not impervious to resistance. Next, I offer four approaches to loosening the silencing risk double bind.

VI Loosening the Silencing Risk Double Bind

The silencing risk double bind, constructed in part by commercial and political actors, threatens to squeeze the pluralism potential from human rights practitioners’ adoption of ICTs. Political adversaries of the human rights world benefit directly from this silencing effect. The commercial actors of social media companies profit from human rights practitioners shaping their communications to social media logics, which can have silencing consequences. Human rights practitioners – as well as all those involved in the human rights and technology space, such as scholars, technologists, and donors – can, however, counteract these forces. In outlining four approaches to loosening the risk double bind, this chapter moves beyond the techno-pessimistic enumeration of materialized risk, and its potential contribution to silencing risk perception, toward a techno-pragmatic position. The four approaches, which work best in tandem, support the development and adoption of ICTs for human rights pluralism. The first pair of approaches, involving education and technology, are about mitigating materialized risks, while the second pair, involving reflexivity and discourse, relate to the construction and perception of risk. As human rights practitioners know very well, risk is an unavoidable element of their work; the aim is not to eliminate it, but to work alongside risk without it getting in the way.

A The Educational Approach

Knowing about risk without overblowing it involves understanding the origins of risk as well as mitigation strategies. Education projects for digital literacy – particularly around data literacy, security training, and social media advocacy – are proliferating apace with interest in digital human rights practices. For example, The Engine Room, Benetech, and Amnesty International recently published DatNav: How to Navigate Digital Data for Human Rights Research, which has since been translated into Spanish and Arabic.Footnote 100 Amnesty International’s Citizen Evidence Lab walks practitioners through techniques for verifying digital truth-claims.Footnote 101 New Tactics in Human Rights hosts online conversations about using social media for advocacy, among other topics.Footnote 102 These educational resources are targeted at human rights practitioners, who share them through their networks of knowledge exchange.

That said, education has its limits. Expertise in digital fact-finding and advocacy can mitigate the materialization of some risk, but to the extent that the use of ICTs remains inscrutable – due, for example, to black-box algorithms or to an ever-shifting security terrain – some risk always remains. Furthermore, it is difficult to inform diffuse arrays of civilian witnesses about risk, which puts the burden of responsibility for digital security more squarely on the shoulders of human rights practitioners.Footnote 103

B The Technological Approach

The technological pathway out of the risk double bind involves using ICTs built to address the risks engendered by digital communications. If human rights practitioners adopt these technologies to communicate with civilian witnesses, they go some way toward protecting those witnesses as well. For example, human rights practitioners are increasingly communicating via messaging applications, like WhatsApp, that are relatively impervious to surveillance. Many are consulting Security in-a-Box, developed by Front Line Defenders and the Tactical Technology Collective to introduce communities of users to digital security tools in seventeen languages.Footnote 104
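The principle these tools implement can be sketched in a few lines using the PyNaCl cryptography library. This illustrates end-to-end public-key encryption in general, not the protocol of any application named above; note, too, that even an end-to-end encrypted channel still exposes metadata about who is communicating.

```python
# Sketch of end-to-end public-key encryption with PyNaCl (libsodium).
# Party names are illustrative; real tools add key verification,
# forward secrecy, and other protections omitted here.
from nacl.public import PrivateKey, Box

witness_key = PrivateKey.generate()
practitioner_key = PrivateKey.generate()

# Encrypt so that only the practitioner can read the message; an
# intermediary carrying the ciphertext sees only opaque bytes.
sending_box = Box(witness_key, practitioner_key.public_key)
ciphertext = sending_box.encrypt(b"Documentation of the incident attached.")

# The practitioner decrypts with their private key and the witness's
# public key; any tampering with the ciphertext raises an exception.
receiving_box = Box(practitioner_key, witness_key.public_key)
print(receiving_box.decrypt(ciphertext))
```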

Of course, introducing technical fixes to digital communication risk may instead compound this risk, even if the technical fixes are done with the best of intentions. This is because the adoption of new technologies escalates the technological “arms race” between human rights practitioners and adversary state actors.Footnote 105 A case in point was the 2014 arrest for treason of human rights bloggers in Ethiopia, in which their use of Security in-a-Box was presented as evidence against them.Footnote 106 This potential for the inadvertent escalation of risks is one reason why the latter two approaches, the reflexive and the discursive, are vital complements to the educational and technological approaches.

C The Reflexive Approach

Reflexive and discursive approaches call for critical perspectives on risk that unsettle taken-for-granted interpretations and practices. Reflexivity requires considering one’s own role in making and perceiving risk, as well as the ways in which broader power relations shape risk assemblages.Footnote 107 It is all too easy to think of risk as an individual problem, when it is actually a socially constructed phenomenon.Footnote 108 For example, human rights practitioners are told to strengthen passwords, adopt encryption, and be vigilant about social engineering – or risk being hacked or surveilled. This is despite the fact that these risks emerge from the confluence of a multitude of commercial, criminal, and political actors.Footnote 109 Our tendency to individualize risk is to the benefit of these powerful actors. A broader view of risk that sheds light on these actors’ roles redresses deniability and supports accountability in the determination of risk responsibility.Footnote 110 Furthermore, this view helps to safeguard individuals by painting a more comprehensive picture of risk and how and why it occurs.

Reflexivity about one’s own roles and responsibilities in constructing risk is also important. Asking individuals to participate in a human rights technology project is also asking them to take on risk. This risk may be difficult to anticipate, in part because the context and resources where technologies are developed – usually the Global North – do not match the context in which the technology is being deployed. For example, digital security experts convinced one NGO to change its operating system, but the new operating system was not compatible with the NGO’s printer. The NGO’s employees had to bring files on memory sticks to printers at local Internet cafés. The memory sticks got lost in the process, which created a greater security risk than the original risk the operating system change was implemented to address.Footnote 111

Practitioners in this sector must also be reflexive concerning their assumptions about civilian witnesses’ participation in digital human rights fact-finding and these witnesses’ knowledge of associated risk. Some civilian witnesses are driven to document human rights violations by the somewhat idealized goal of speaking truth to power. For others, however, bearing witness may instead be a life-or-death matter, a matter of local or global politics, an exercise of identity, a function of resources – or simply a response to digital solicitations by human rights practitioners.Footnote 112 Some are accidental witnesses, while others are activists. Tailoring risk assessment to individual risk profiles and providing support for risk-bearing may require difficult, on-the-ground work that outweighs the mediation benefits of ICTs. Furthermore, practitioners may consider that soliciting digital information from civilian witnesses is too risky for certain contexts. Again, reflexivity is important, as practitioners need to consider whether they are or should be making silencing decisions on behalf of civilian witnesses. While an accidental witness may not have had an opportunity to think through risk, an activist witness’s drive to digitally communicate documentation of violations may be underpinned by extremely sophisticated risk calculations.

D The Discursive Approach

The discursive pathway out of the risk double bind also involves focusing on the social construction of risk – this time by being aware of the possibility that actors communicate risks in order to control the behavior of others. In other words, risk perception can be a discourse used to protect or pursue power. The discursive approach to loosening the risk double bind involves identifying those who might benefit from risk discourses in order to assess how well perception corresponds to materialized risk.Footnote 113 For example, state actors may visibly surveil or punish digital activists not only to quell those individuals, but also to create a broader chilling effect on online human rights reporting.Footnote 114 As Amnesty International’s secretary general stated following the UK government’s 2015 admission that its agencies had been intercepting Amnesty’s communications, “How can we be expected to carry out our crucial work around the world if human rights defenders and victims of abuses can now credibly believe their confidential correspondence with us is likely to end up in the hands of governments?”Footnote 115

These discourses don’t just serve political purposes; they can have commercial benefits, too. For example, tales of criminal and terrorist use of the “dark web” may arouse public suspicion about human rights practitioners’ use of it in fact-finding, but they also sell newspapers.Footnote 116 Risk perceptions also create profit for the security sector, a major industry in which digital security is a growth niche.Footnote 117 The discursive approach to risk perceptions is particularly important, since, given the technical complexity of ICTs, most human rights practitioners must rely on external expertise to assess actual risk and appropriate responses.Footnote 118 Circling back to the education approach, incorporating this external knowledge must involve interrogating its motives.

VII Conclusion

Techno-optimism has surfaced in the human rights world, as in many others, based in part on the perceived benefits of ICTs for the pluralism of human rights communication. These benefits have been realized in a number of cases, but the application of ICTs has also materialized risk. As human rights practitioners consider whether and how to incorporate ICTs into their practices, this chapter has sought to outline some types of risk they may face and associated consequences for human rights pluralism. This risk, I argue, is a product of ICTs’ affordance for mediation, or communication across time and place. This mediation, and the technical features it requires, alters the identity clues, interpretation cues, and contextual information communicators draw upon in order to increase the likelihood that their communication is successful.Footnote 119

Furthermore, the use of ICTs introduces new intermediary actors to the human rights communication chain, and the technical complexity of ICTs makes these actors and their impact on communication more difficult to identify and assess.Footnote 120 Of particular note here are new commercial actors with profit motives. To be sure, human rights reporters have interacted with commercial motives before in their communication practices, such as in considering the marketability of newsworthiness decisions.Footnote 121 Never before, however, have commercial actors been so influential over and yet so hidden in mediation.Footnote 122 Cases in point are the commercial-political surveillance nexus, the lucrative gray market for spyware, and the proprietary, revenue-maximizing algorithms of social media platforms. Incorporating ICTs into human rights fact-finding and advocacy contributes to new risk assemblages for human rights practitioners.

The types of risk outlined here are by no means the only ones that ICTs introduce or exacerbate for the human rights world. Others include the risk to human rights practitioners of secondary trauma brought on by exposure to images and videos of violations, or the retraumatization of individuals featured in advocacy material, particularly if the material is re-mediated and re-mixed.Footnote 123 The types of risk detailed here, however, have particular consequences for human rights pluralism. In digital fact-finding, human rights practitioners face surveillance risk that can imperil their physical security and deception risk that can jeopardize their reputational integrity. In digital advocacy, they encounter the risk of mistakes that have negative repercussions for reputations, as well as the risk that miscalculation poses for their resources. Some of these materialized risks and their repercussions silence human rights practitioners and civilian witnesses, while others deafen intended audiences to human rights communication. The perception of these risks can also be silencing, leading to a risk double bind in which both knowing and not knowing about risk can curtail human rights communication.

Acknowledging the silencing risk double bind throws into relief the importance of thinking about risk not in isolation, but rather as socially constructed. These social contexts produce values and connect individuals that could end up on opposite sides of a risk trade-off. In deciding whether or not to speak in the face of risk, human rights practitioners are choosing between the value of pluralism and the value of security. In so doing, they are also choosing between types of follow-on risk: the risk of physical harm and harm to reputations and resources if they choose pluralism, and the risk of ongoing human rights violations if they choose security. This means they are also making choices between risk populations.

The silencing risk double bind can feel unstoppable, part of the “juggernaut” of rapidly advancing technological change – with its associated complexities, inscrutable interconnections, and risk – that characterizes contemporary societies.Footnote 124 Yet, silencing is not inevitable. This chapter proposes four approaches to loosening the risk double bind: the educational and technological, which can limit materialized risk, and the reflexive and discursive, which can stay the construction of risk and erroneous risk perceptions. For practitioners, technologists, donors, and scholars, these approaches are useful heuristics for assessing risk. These heuristics also support human rights practices that allow successful digital communication to coexist with risk rather than be dictated by it.

The net impact of ICTs on the pluralism of the human rights world vis-à-vis the world of power it aims to hold to account is difficult to determine. What we can establish, however, is that materialized and perceived risk curtail pluralism unevenly within the human rights world. This dampening effect is stronger for human rights practitioners in more perilous political and social contexts and with less expertise and associated resources. It is not only particular organizations that are more affected by the materialized and perceived risk of digital human rights fact-finding and advocacy, but also the particular populations and particular human rights that they represent. For a world fundamentally concerned with pluralism, this momentum toward the use of technology creates risk for human rights enforcement in general, as it may be reinforcing inequalities around who speaks and gets heard on which human rights.

Footnotes

6 The Utility of User-Generated Content in Human Rights Investigations

1 C. Silverman and R. Tsubaki, “The Opportunity for Using Open Source Information and User-Generated Content in Investigative Work,” in C. Silverman (ed.), Verification Handbook for Investigative Reporting (Maastricht, the Netherlands: European Journalism Centre, 2015), http://verificationhandbook.com/book2/chapter1.php.

2 S. Padania et al., Cameras Everywhere: Current Challenges and Opportunities at the Intersection of Human Rights, Video, and Technology (Brooklyn, NY: WITNESS, 2011), p. 16.

4 A. Azoulay, The Civil Contract of Photography (Brooklyn, NY: Zone Books, 2008).

5 A. Whiting, “The ICC Prosecutor’s New Draft Strategic Plan,” Just Security, www.justsecurity.org/24808/icc-prosecutors-draft-strategic-plan.

6 K. Matheson, “Video as Evidence: To be evidence, what does video need?,” New Tactics in Human Rights, July 20, 2014, www.newtactics.org/comment/7427#comment-7427.

7 WITNESS offers an excellent guide that explains the elements of linkage evidence and also provides tips to activists and ordinary citizens about filming video in a way that provides information about responsibility for particular actions. WITNESS, “Proving Responsibility: Filming Linkage and Notice Evidence,” https://library.witness.org/product/video-as-evidence-proving-responsibility-filming-linkage-and-notice-evidence.

8 S. Cohen, States of Denial: Knowing about Atrocities and Suffering (Cambridge: Polity Press, 2001), p. 7.

9 S. Cohen, “Government Responses to Human Rights Reports: Claims, Denials, and Counterclaims” (1996) 18 Human Rights Quarterly 517–43 at 523.

10 Amnesty International and Forensic Architecture, “Rafah: Black Friday Report,” Methodology section, July 24, 2015, https://blackfriday.amnesty.org/methodology.php.

12 C. Silverman, Lies, Damn Lies and Viral Content: How News Websites Spread (and Debunk) Online Rumors, Unverified Claims and Misinformation (New York: Tow Center for Digital Journalism and Columbia Journalism School, 2015), http://towcenter.org/research/lies-damn-lies-and-viral-content; see also G. King, J. Pan, and M. E. Roberts, “How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument,” Working Paper, American Political Science Review, http://j.mp/1Txxiz1.

13 Carter Center, Syria Countrywide Conflict Report #4 (Atlanta, GA: Carter Center, 2014), p. 3, www.cartercenter.org/resources/pdfs/peace/conflict_resolution/syria-conflict/NationwideUpdate-Sept-18-2014.pdf.

15 M. Weaver, “How Brown Moses exposed Syrian arms trafficking from his front room,” The Guardian, March 21, 2013, www.theguardian.com/world/2013/mar/21/frontroom-blogger-analyses-weapons-syria-frontline.

16 E. Higgins, “Clear Evidence of DIY Barrel Bombs Being Used by the Syrian Air Force,” Brown Moses Blog, October 27, 2012, http://brown-moses.blogspot.co.uk/2012/10/clear-evidence-of-diy-barrel-bombs.html; E. Higgins, “Cluster Bomb Usage Rose Significantly Across Syria,” Brown Moses Blog, October 12, 2012, http://brown-moses.blogspot.co.uk/2012/10/cluster-bomb-usage-rises-significantly.html.

17 E. Higgins, “Evidence of Multiple Foreign Weapons Systems Smuggled to the Syrian Opposition in Daraa,” Brown Moses Blog, January 25, 2013, http://brown-moses.blogspot.co.uk/2013/01/evidence-of-multiple-foreign-weapon.html; E. Higgins, “Weapons from the Former Yugoslavia Spread through Syria’s War,” At War: Notes from the Front Line, The New York Times, February 25, 2013, http://atwar.blogs.nytimes.com/2013/02/25/weapons-from-the-former-yugoslavia-spread-through-syrias-war/?_r=0; C. J. Chivers and E. Schmitt, “Saudis Step Up Help for Rebels in Syria With Croatian Arms,” The New York Times, February 26, 2013, p. A1.

18 C. McNaboe, e-mail to author, June 3, 2015.

19 Carter Center, Syria Countrywide Conflict Report #4, p. 23.

21 Carter Center, Regional Conflict Report: Ras al-Ain (Atlanta: Carter Center, 2013), p. 2, www.cartercenter.org/resources/pdfs/peace/conflict_resolution/syria-conflict/Ras-al-AinReport.pdf.

22 Carter Center, Syria Countrywide Conflict Report #1 (Atlanta: Carter Center, 2013), p. 4, www.cartercenter.org/resources/pdfs/peace/conflict_resolution/syria-conflict/nationwidereport-aug-20-2013.pdf.

23 Carter Center, Syria Countrywide Conflict Report #2 (Atlanta, GA: Carter Center, 2013), p. 5, www.cartercenter.org/resources/pdfs/peace/conflict_resolution/syria-conflict/nationwideupdate_nov-20-2013.pdf.

24 Carter Center, Syria Countrywide Conflict Report #3, p. 6.

26 Quoted in M. Czuperski et al., Hiding in Plain Sight: Putin’s War in Ukraine (Washington, DC: Atlantic Council, 2015), p. 7.

28 Ibid., p. 3.

29 Ibid., p. 8.

30 Ibid., p. 13.

33 The report provides documentation of Bellingcat’s methods on pp. 18–19 and 28–31.

34 Ibid., p. 17.

35 Ibid., pp. 25–27.

36 Bellingcat Investigation Team, “Origin of the Separatists’ Buk: A Bellingcat Investigation,” Bellingcat, November 8, 2014, www.bellingcat.com/news/uk-and-europe/2014/11/08/origin-of-the-separatists-buk-a-bellingcat-investigation; D. Romein, “Is This the Launch Site of the Missile That Shot Down Flight Mh17? A Look at the Claims and Evidence,” Bellingcat, January 27, 2015, www.bellingcat.com/news/uk-and-europe/2015/01/27/is-this-the-launch-site-of-the-missile-that-shot-down-flight-mh17.

37 Bellingcat Investigation Team, “Forensic Analysis of Satellite Images Released by the Russian Ministry of Defense,” Bellingcat, May 31, 2015, www.bellingcat.com/wp-content/uploads/2015/05/Forensic_analysis_of_satellite_images_EN.pdf.

38 Ibid., p. 18.

39 Ibid., p. 42.

40 Amnesty International, “Our Job Is to Shoot, Slaughter and Kill”: Boko Haram’s Reign of Terror in North East Nigeria (London: Amnesty International, 2015).

41 Ibid., p. 60.

43 Ibid., pp. 60–61.

44 Amnesty International, Stars on Their Shoulders. Blood on Their Hands: War Crimes Committed by the Nigerian Military (London: Amnesty International, 2015).

45 Ibid., p. 4.

49 Ibid., p. 37.

50 Ibid., p. 40.

51 Ibid., p. 7.

52 Ibid., p. 36.

53 Ibid., pp. 37–38.

54 Ibid., p. 46.

55 These videos were first made public by Amnesty International in August 2014. Amnesty International, “Gruesome Footage Implicates Nigerian Military in Atrocities,” www.amnestyusa.org/news/press-releases/gruesome-footage-implicates-nigerian-military-in-atrocities.

56 Amnesty, Stars on Their Shoulders. Blood on Their Hands, p. 40.

57 S. Dubberley, E. Griffin, and H. Mert Bal, “Making Secondary Trauma a Primary Issue: A Study of Eyewitness Media and Vicarious Trauma on the Digital Frontline,” Eyewitness Media Hub, June 15, 2015, http://eyewitnessmediahub.com/research/vicarious-trauma.

58 For a sampling, search YouTube for “Syria death,” among many other combinations of terms. There are many videos with more than one million hits.

59 J. Aronson, “Preserving Human Rights Media for Justice, Accountability, and Historical Clarification” (2017) 11 Genocide Studies and Prevention: An International Journal 82–99.

60 “Guatemala Trials before the National Courts of Guatemala,” International Justice Monitor, www.ijmonitor.org/category/guatemala-trials; “Operation Condor: Former Argentine junta leader jailed,” BBC News, May 28, 2016, www.bbc.com/news/world-latin-america-36403909; “La Chambre Africaine D’assises Prononce La Perpetuite Contre Hissein Habre,” Chambres africaines extraordinaires, May 30, 2016, www.chambresafricaines.org/index.php/le-coin-des-medias/communiqu%C3%A9-de-presse/638-la-chambre-africaine-d%E2%80%99assises-prononce-la-perpetuite-contre-hissein-habre.html.

7 Big Data Analytics and Human Rights: Privacy Considerations in Context

* Zachary Gold, JD, a research analyst at the Data & Society Research Institute, contributed research to an early version of this work that was presented as a conference paper.

1 See Universal Declaration of Human Rights, December 10, 1948, U.N. G.A. Res. 217 A (III), art. 12; International Covenant on Civil and Political Rights, December 16, 1966, S. Treaty Doc. No. 95–20, 6 I.L.M. 368 (1967), 999 U.N.T.S. 171, art. 17.

2 M. Land et al. demonstrate how the impact of emerging information and communication technologies can be examined from a rights-based framework and international human rights law. M. K. Land et al., “#ICT4HR: Information and Communication Technologies for Human Rights,” World Bank Institute, Nordic Trust Fund, Open Development Technology Alliance, and ICT4Gov, November 2012, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2178484.

3 H. Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Palo Alto, CA: Stanford University Press, 2010).

4 United Nations Human Rights Council, “Summary of the Human Rights Council panel discussion on the right to privacy in the digital age,” December 19, 2014, www.un.org/en/ga/search/view_doc.asp?symbol=A/HRC/28/39.

5 H. Nissenbaum, “Privacy as Contextual Integrity” (2004) 79(1) Washington Law Review 119–58.

6 Nissenbaum, Privacy in Context.

8 See Chapter 6. See also J. Aronson, “Mobile Phones, Social Media, and Big Data in Human Rights Fact-Finding: Possibilities, Challenges, and Limitations,” in P. Alston and S. Knuckey (eds.), The Transformation of Human Rights Fact-Finding (Oxford: Oxford University Press, 2015).

9 In Chapter 12, G. Alex Sinha discusses the need to determine when revealing information to others constitutes a waiver of the human right to privacy.

10 “The ‘mosaic theory’ describes a basic precept of intelligence gathering: Disparate items of information, though individually of limited or no utility to their possessor, can take on added significance when combined with other items of information.” D. E. Pozen, “The Mosaic Theory, National Security, and the Freedom of Information Act” (Note) (2005) 115 Yale Law Journal 628–79.

11 Alston and Knuckey (eds.), The Transformation of Human Rights Fact-Finding.

12 P. Ball, “The Bigness of Big Data,” in Alston and Knuckey (eds.), The Transformation of Human Rights Fact-Finding, p. 428.

13 P. Meier, Digital Humanitarians (Boca Raton, FL: CRC Press, 2015); P. Meier, “Big (Crisis) Data: Humanitarian Fact-Finding with Advanced Computing,” in Alston and Knuckey (eds.), The Transformation of Human Rights Fact-Finding.

14 B. Campbell and S. Blair, “How ‘big data’ could help stop the spread of Ebola,” PRI’s The World, October 24, 2014, www.pri.org/stories/2014-10-24/how-big-data-could-help-stop-spread-ebola.

15 S. McDonald, “Ebola: A Big Data Disaster,” The Centre for Internet and Society, March 1, 2016, http://cis-india.org/papers/ebola-a-big-data-disaster.

16 UN Human Rights Office of the High Commissioner, Monitoring Economic, Social and Cultural Rights, HR/P/PT/7/Rev. 1, 2011, www.ohchr.org/Documents/Publications/Chapter20-48pp.pdf.

17 UN Human Rights Office of the High Commissioner, Training Manual on Human Rights Monitoring: The Monitoring Function, March 21, 1999, www.ohchr.org/Documents/Publications/training7part59en.pdf, § A–C, G, J–K.

18 Ontario Human Rights Commission, “Count me in! Collecting human rights based data – Summary (fact sheet),” www.ohrc.on.ca/en/count-me-collecting-human-rights-based-data-summary-fact-sheet.

19 K. Crawford, “The Anxieties of Big Data,” The New Inquiry, May 30, 2014, https://thenewinquiry.com/the-anxieties-of-big-data/.

20 UN Human Rights Council, Report of the Special Rapporteur on the Right to Privacy, Joseph Cannataci, U.N. Doc. A/HRC/34/60 (February 24, 2017), p. 9 (“Cannataci Report”).

21 S. Davies and J. Youde (eds.), The Politics of Surveillance and Response to Disease Outbreaks: The New Frontier for States and Non-State Actors (London: Routledge, 2016), p. 3.

22 A. Fairchild, R. Bayer, and J. Colgrove, Searching Eyes: Privacy, the State, and Disease Surveillance in America (Berkeley: University of California Press, 2007).

23 Cannataci Report, p. 9.

24 See P. Ohm, “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization” (2009) 57 UCLA Law Review 1701–88; A. Narayanan and E. Felten, “No silver bullet: De-identification still doesn’t work,” July 29, 2014, http://randomwalker.info/publications/no-silver-bullet-de-identification.pdf.

25 F. Greenwood et al., “The Signal Code: A Human Rights Approach to Information during Crisis,” Harvard Humanitarian Initiative, January 2017, http://hhi.harvard.edu/publications/signal-code-human-rights-approach-information-during-crisis.

26 UN Global Pulse, “Privacy and Data Protection Principles,” www.unglobalpulse.org/privacy-and-data-protection; see also UN Global Pulse, “Workshop on ICT4D Principle 8: Address Privacy & Security in Development Programs,” www.unglobalpulse.org/events/workshop-ict4d-principle-8-address-privacy-security-development-programs.

27 UN Global Pulse, Unpublished report on data privacy and data security for ICT4D (2015).

28 Necessary and Proportionate, “International Principles on the Application of Human Rights to Communications Surveillance,” May 2014, https://necessaryandproportionate.org/.

29 General Data Protection Regulation (Regulation [EU] 2016/679), 2–17, http://data.consilium.europa.eu/doc/document/ST-5419-2016-INIT/en/pdf.

30 See Lea Shaver’s chapter (Chapter 2) for an argument that the right to science also requires such an assessment.

31 See Enrique Piracés’s chapter (Chapter 13).

8 The Challenging Power of Data Visualization for Human Rights Advocacy

1 “Talk to the Newsroom: Graphics Director Steve Duenes,” The New York Times, February 28, 2008, www.nytimes.com/2008/02/25/business/media/25asktheeditors.html.

2 M. Friendly and D. J. Denis, “Milestones in the history of thematic cartography, statistical graphics, and data visualization,” August 24, 2009, www.math.yorku.ca/SCS/Gallery/milestone/milestone.pdf.

3 K. Rall et al., “Data Visualization for Human Rights Advocacy” (2016) 8 Journal of Human Rights Practice 171–97.

4 M. Lima, The Book of Trees: Visualizing Branches of Knowledge (New York: Princeton Architectural Press, 2014).

5 F. Cajori, A History of Mathematical Notations (Mineola, NY: Dover Publications, 1928), pp. 2–18, 21–29, 43–44; see also History of Cartography (Chicago: University of Chicago Press, 1997), vol. 1, pp. 107–47, vol. 2, pp. 96–127.

6 J. C. Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (New Haven, CT: Yale University Press, 1998), p. 252.

7 M. Friendly, “The Golden Age of Statistical Graphics” (2008) 23 Statistical Science 502–35.

8 I. Spence and H. Wainer, “Who Was Playfair?” (1997) 10(1) Chance 35–37.

9 W. Playfair, The Commercial and Political Atlas: Representing, by Means of Stained Copper-Plate Charts, the Progress of the Commerce, Revenues, Expenditure and Debts of England during the Whole of the Eighteenth Century, 3rd ed. (New York: Cambridge University Press, 2005).

10 W. Playfair, Statistical Breviary; Shewing, on a Principle Entirely New, the Resources of Every State and Kingdom in Europe (London: Wallis, 1801). For more on Playfair’s development of the pie chart, see I. Spence, “No Humble Pie: The Origins and Usage of a Statistical Chart” (2005) 30 Journal of Educational and Behavioral Statistics 353–68.

11 Friendly, “The Golden Age of Statistical Graphics.”

12 P. A. Kidwell, “American Scientists and Calculating Machines – From Novelty to Commonplace” (1990) 12 IEEE Annals of the History of Computing 31–40.

13 I. D. Hill, “Statistical Society of London – Royal Statistical Society: The First 100 Years: 1834–1934” (1984) 147 Journal of the Royal Statistical Society. Series A (General) 130–39 at 131, 137.

14 See Friendly, “The Golden Age” at 502.

16 For an extensive discussion of Dr. Snow’s advances in graphical reasoning, see E. R. Tufte, Visual Explanations: Images and Quantities, Evidence and Narrative (Cheshire, CT: Graphics Press, 1997), pp. 27–37.

17 The map also shows notable outliers: there were no cholera deaths reported at the neighboring brewery, where, presumably, there were other things to drink. See E. Tufte, The Visual Display of Quantitative Information, 2nd ed. (Cheshire, CT: Graphics Press, 2001), p. 30.

18 I. B. Cohen, “Florence Nightingale” (1984) 250 Scientific American 128–37.

19 S. Rogers, “Florence Nightingale, datajournalist: Information has always been beautiful,” The Guardian, August 13, 2010, www.theguardian.com/news/datablog/2010/aug/13/florence-nightingale-graphics.

20 Cohen, “Florence Nightingale,” at 132.

22 V. Chevallier, “Notice nécrologique sur M. Minard, inspecteur général des ponts et chaussées, en retraite” (1871) 2 Annales des ponts et chaussées 1–22 (translated by D. Finley at www.edwardtufte.com/tufte/minard-obit).

23 For an extensive discussion of C. Minard’s “space-time-story graphics,” see Tufte, Visual Display, pp. 40–41.

24 L. D. Cook, “Converging to a National Lynching Database: Recent Developments and the Way Forward” (2012) 45 Historical Methods: A Journal of Quantitative and Interdisciplinary History 55–63 at 56.

26 M. Langford and S. Fukuda-Parr, “The Turn to Metrics” (2012) 30 Nordic Journal of Human Rights 222–38 at 222.

27 M. L. Satterthwaite and J. Simeone, “A Conceptual Roadmap for Social Science Methods in Human Rights Fact-Finding,” in P. Alston and S. Knuckey (eds.), The Transformation of Human Rights Fact-Finding (New York: Oxford University Press, 2016), p. 323.

28 P. Gready, “Introduction – Responsibility to the Story” (2010) 2 Journal of Human Rights Practice 177–90 at 178.

29 For an examination of several such data sets, see M. L. Satterthwaite, “Coding Personal Integrity Rights: Assessing Standards-Based Measures Against Human Rights Law and Practice” (2016) 48 New York University Journal of International Law and Politics 513–79.

30 O. F. Norheim and S. Gloppen, “Litigating for Medicines: How Can We Assess Impact on Health Outcomes?,” in A. E. Yamin and S. Gloppen (eds.), Litigating Health Rights: Can Courts Bring More Justice to Health? (Cambridge, MA: Harvard University Press, 2011), pp. 306–07.

31 See, e.g., P. Slovic and D. Zionts, “Can International Law Stop Genocide When Our Moral Intuitions Fail Us?,” in R. Goodman, D. Jinks, and A. K. Woods (eds.), Understanding Social Action, Promoting Human Rights (New York: Oxford University Press, 2012), pp. 100–28.

32 Globally, the number of mobile cellular subscriptions reached 97 per 100 people in 2014. Serious inequality remains, with only 56 per 100 in low-income, 96 per 100 in middle-income, and 122 per 100 in high-income countries. See International Telecommunication Union, World Telecommunication/ICT Development Report and database, “Mobile cellular subscriptions (per 100 people),” http://data.worldbank.org/indicator/IT.CEL.SETS.P2.

33 As Mark Latonero writes, “The very same tools, techniques, and processes in the collection and use of big data can be employed to both violate and protect human rights.” M. Latonero, “Big Data Analytics and Human Rights: Privacy Considerations in Context,” Chapter 7.

34 S. E. Merry, The Seductions of Quantification: Measuring Human Rights, Gender Violence, and Sex Trafficking (Chicago and London: University of Chicago Press, 2016), p. 3.

35 M. Power, The Audit Society: Rituals of Verification (Oxford: Oxford University Press, 1997), pp. 4–5.

36 For a discussion of this dynamic, see M. L. Satterthwaite and A. Rosga, “The Trust in Indicators: Measuring Human Rights” (2009) 27 Berkeley Journal of International Law 253–315 at 253.

37 K. E. Davis, B. Kingsbury, and S. E. Merry, “Indicators as a Technology of Global Governance” (2012) 46(1) Law & Society Review 71–104 at 73–74.

38 E. Witchel, “Getting Away with Murder: CPJ’s 2015 Global Impunity Index spotlights countries where journalists are slain and the killers go free,” Committee to Protect Journalists, October 8, 2015.

39 Office of the United Nations High Commissioner for Human Rights, Human Rights Indicators, a Guide to Measurement and Implementation (New York: United Nations, 2012).

40 D. de Felice, “Business and Human Rights Indicators to Measure the Corporate Responsibility to Respect: Challenges and Opportunities” (2015) 37 Human Rights Quarterly 511–55.

41 For development processes, see T. Landman et al., Indicators for Human Rights Based Approaches to Development in UNDP Programming: A User’s Guide (New York: United Nations Development Programme, 2006). For a discussion of rights-based indicators for humanitarian aid, see M. L. Satterthwaite, “Indicators in Crisis: Rights-Based Humanitarian Indicators in Post-Earthquake Haiti” (2012) 43(4) New York University Journal of International Law and Politics 865–965.

42 Merry, The Seductions of Quantification, p. 16.

44 M. Price and P. Ball, “Big Data, Selection Bias, and the Statistical Patterns of Mortality in Conflict” (2014) 34(1) The SAIS Review of International Affairs 9–20.

45 For instance, see P. Heijmans, “Myanmar criticised for excluding Rohingyas from Census,” Al Jazeera, May 29, 2015, www.aljazeera.com/news/2015/05/myanmar-criticised-excluding-rohingyas-census-150529045829329.html.

46 Brian Root lists other biases affecting human rights data collection in “Numbers Are Only Human,” in Alston and Knuckey (eds.), The Transformation of Human Rights Fact-Finding, p. 363.

47 S. McInerney-Lankford and H. Sano, Human Rights Indicators in Development: An Introduction (Washington, DC: World Bank Publications, 2010), pp. 16–17.

48 See, for instance, P. Ball et al., “The Bosnian Book of the Dead: Assessment of the Database,” Households in Conflict Network Research Design (2007), and M. Price et al., “Full Updated Statistical Analysis of Documentation of Killings in the Syrian Arab Republic,” Human Rights Data Analysis Group (2013).

49 P. Ball et al., “How Many Peruvians Have Died? An Estimate of the Total Number of Victims Killed or Disappeared in the Armed Internal Conflict between 1980 and 2000,” American Association for the Advancement of Science, August 28, 2003.

50 L. Kurgan, “Representation and the Necessity of Interpretation,” Close Up at a Distance: Mapping, Technology, and Politics (New York: Zone Books, 2013), p. 35.

51 A. G. Ferguson, “Policing Predictive Policing” (forthcoming) 94 Washington University Law Review.

52 K. Lum and W. Isaac, “To predict and serve?” (2016) 13 Significance 14–19 at 16.

53 On June 29, 2016, the ACLU filed a lawsuit on behalf of a group of academic researchers, computer scientists, and journalists challenging the US Computer Fraud and Abuse Act. The law creates significant barriers to research and testing necessary to uncover discrimination in computer algorithms. See E. Bhandari and R. Goodman, “ACLU Challenges Computer Crimes Law That Is Thwarting Research on Discrimination Online,” American Civil Liberties Union, June 29, 2016, www.aclu.org/blog/free-future/aclu-challenges-computer-crimes-law-thwarting-research-discrimination-online.

54 See, e.g., C. Sandvig et al., “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms,” presentation at the 64th Annual Meeting of the International Communication Association, Seattle, WA, May 24–26, 2014.

55 A. Rusbridger, “The Snowden Leaks and the Public,” The New York Review of Books, November 21, 2013, www.nybooks.com/articles/2013/11/21/snowden-leaks-and-public/.

56 C. Koettl, “Chapter 7: Using UGC in human rights and war crimes investigations,” Verification Handbook for Investigative Reporting (Maastricht, the Netherlands: European Journalism Centre, 2015), pp. 46–49.

57 WITNESS, “Is This for Real? How InformaCam Improves Verification of Mobile Media Files,” WITNESS, January 15, 2013, https://blog.witness.org/2013/01/how-informacam-improves-verification-of-mobile-media-files/.

58 See, for instance, US Department of Health and Human Services Office for Human Research Protections, Informed Consent Checklist (1998), www.hhs.gov/ohrp/regulations-and-policy/guidance/checklists/index.html, and UK Information Commissioner’s Office, Code of Practice on Anonymisation (2012), https://ico.org.uk/for-organisations/guide-to-data-protection/anonymisation/.

59 A lively debate is currently under way concerning revisions to the Common Rule and federal regulations, with an especially relevant part of that debate centered on ethics in big data research. See J. Metcalf, E. F. Keller, and D. Boyd, “Perspectives on Big Data, Ethics, and Society,” The Council for Big Data, Ethics, and Society, May 23, 2016, http://bdes.datasociety.net/wp-content/uploads/2016/05/Perspectives-on-Big-Data.pdf; J. Metcalf and K. Crawford, “Where Are Human Subjects in Big Data Research? The Emerging Ethics Divide” (2016) Big Data & Society 1–14.

60 See “Responsible Data Forum: Visualization,” Responsible Data Forum, January 15, 2016, https://responsibledata.io/forums/data-visualization/; see also F. Neuhaus and T. Webmoor, “Agile Ethics for Massified Research and Visualization” (2012) 15(1) Information, Communication & Society 43–65.

61 For more details about the discussion, see M. Stempeck, “DataViz for good: How to ethically communicate data in a visual manner: #RDFviz,” Microsoft New York, January 20, 2016, https://blogs.microsoft.com/newyork/2016/01/20/dataviz-for-good-how-to-ethically-communicate-data-in-a-visual-manner-rdfviz/.

62 B. Root, “Data Analysis for Human Rights Advocacy,” School of Data, November 23, 2013, https://schoolofdata.org/author/broot/.

63 Human Rights Watch, “Chapter 15: Statistical Analysis of Violations,” in Under Orders: War Crimes in Kosovo (New York: Human Rights Watch, 2001), pp. 345–68.

64 Center for Economic and Social Rights (CESR), “Visualizing Rights: Guatemala Fact Sheet” (2008), www.cesr.org/sites/default/files/Guatemala_Fact_Sheet.pdf.

66 CESR, “Visualizing Rights: Cambodia Fact Sheet” (2009), www.cesr.org/sites/default/files/cambodia_WEB_CESR_FINAL.pdf.

67 A. Corkery, S. Way, and V. Wisniewski Otero, The OPERA Framework: Assessing Compliance with the Obligation to Fulfill Economic, Social and Cultural Rights (Brooklyn, NY: Center for Economic and Social Rights, 2012), p. 30.

68 A. Tal and B. Wansink, “Blinded with Science: Trivial Graphs and Formulas Increase Ad Persuasiveness and Belief in Product Efficacy” (2014) 25 Public Understanding of Science 117–25.

69 A. V. Pandey et al., “How Deceptive Are Deceptive Visualizations?: An Empirical Analysis of Common Distortion Techniques,” presentation at the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, April 18–23, 2015.

70 M. A. Borkin et al., “Beyond Memorability: Visualization Recognition and Recall” (2016) 22 IEEE Transactions on Visualization and Computer Graphics 519–28.

71 B. Root, “Numbers Are Only Human: Lessons for Human Rights Practitioners from the Quantitative Literacy Movement,” in Alston and Knuckey (eds.), The Transformation of Human Rights Fact-Finding (referring to a 196-page report on sexual assault in Washington, DC; roughly five pages of statistical analysis bore the brunt of the negative criticism).

72 Experiments have shown that people perceive position more effectively than area. A line next to another line half its length is more easily understood as double than are circles or squares compared with area doubled. See W. Cleveland and R. McGill, “Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods” (1984) 79 Journal of the American Statistical Association 531–54.

73 Pandey et al., “How Deceptive are Deceptive Visualizations?”

74 J. Fellner et al., Nation Behind Bars: A Human Rights Solution (New York: Human Rights Watch, 2014).

75 P. Overberg and J. Adamy (reporting), L. T. Vo and J. Ma (interactive), A. V. Dam and S. A. Thompson (additional development), “What’s Your Pay Gap?,” The Wall Street Journal, May 17, 2016, http://graphics.wsj.com/gender-pay-gap/.

76 S. A. Perlin, D. Wong, and K. Sexton, “Residential Proximity to Industrial Sources of Air Pollution: Interrelationships among Race, Poverty, and Age” (2001) 51 Journal of the Air & Waste Management Association 406–21.

77 E. Roston and B. Migliozzi, “What’s Really Warming the World?” Bloomberg, June 24, 2015.

78 E. Felner, “Closing the ‘Escape Hatch’: A Toolkit to Monitor the Progressive Realization of Economic, Social, and Cultural Rights” (2009) 1 Journal of Human Rights Practice 402–35.

79 CESR, “Visualizing Rights: Guatemala Fact Sheet.”

80 CESR, “Visualizing Rights: Egypt Fact Sheet” (2013), www.cesr.org/sites/default/files/Egypt.Factsheet.web_.pdf.

81 Langford and Fukuda-Parr, “The Turn to Metrics,” at 222.

82 A particularly elaborate interactive timeline is Human Rights Watch, “Failing Darfur, Five Years On,” www.hrw.org/sites/default/files/features/darfur/fiveyearson/timeline.html.

83 Human Rights Watch, “DR Congo: M23 Rebels Committing War Crimes,” September 11, 2012, www.hrw.org/news/2012/09/11/dr-congo-m23-rebels-committing-war-crimes.

84 See map of prisons visited in E. Ashamu, “Prison Is Not for Me”: Arbitrary Detention in South Sudan (New York: Human Rights Watch, 2012).

85 K. Rall et al., “Data Visualization for Human Rights Advocacy” (2016) 8 Journal of Human Rights Practice 171–97 at 179, 183.

86 On GPS, see L. Kurgan, “From Military Surveillance to the Public Sphere,” in Close Up at a Distance: Mapping, Technology, and Politics (New York: Zone Books, 2013), pp. 39–40. On satellite imagery, see C. Lavers, “The Origins of High Resolution Civilian Satellite Imaging – Part 1: An Overview,” Directions Magazine, January 13, 2013, www.directionsmag.com/entry/the-origins-of-high-resolution-civilian-satellite-imaging-part-1-an-ov/303374.

87 American Association for the Advancement of Science, Geospatial Technologies Project, www.aaas.org/page/geospatial-technology-projects. The AAAS has played a leading role in developing geospatial analysis as a human rights documentation tool.

88 For example, see Amnesty International and Zimbabwe Lawyers for Human Rights, Zimbabwe – Shattered Lives: The Case of Porta Farm (London: Amnesty International, International Secretariat, 2006).

89 The Committee for Human Rights in North Korea has published a series of reports that rely heavily on analysis of satellite images of prison camps and other sites of abuse in the Democratic People’s Republic of Korea. See HRNK publications at www.hrnk.org/publications/hrnk-publications.php.

90 Combining “near” and “far” is a powerful storytelling technique, mixing testimonies, individual stories, or data points of personal interest to the reader (the near view) with the overview of large-scale trends and data abstraction. This creates an empathetic entry point into the larger story, and locates and contextualizes it. Scott Klein touches on this a bit more in “The Design and Structure of a News Application,” ProPublica, https://github.com/propublica/guides/blob/master/design-structure.md, as does Dominikus Baur in “The superpower of interactive datavis? A micro-macro view!,” Medium, April 13, 2017, https://medium.com/@dominikus/the-superpower-of-interactive-datavis-a-micro-macro-view-4d027e3bdc71.

91 A. Van Woudenberg, Covered in Blood: Ethnically Targeted Violence in Northern DRC (New York: Human Rights Watch, 2003), p. 24.

92 Global Witness, Cambodia’s Family Trees: Illegal Logging and the Stripping of Public Assets by Cambodia’s Elite (London: Global Witness, 2007), pp. 48–49.

93 Carter Center, “Carter Center Makes Dynamic Syria Conflict Map Available to Public,” Press Release, March 8, 2016, www.cartercenter.org/news/pr/syria-030916.html.

94 See, for instance, Forensic Architecture’s interactive report “The Killing of Nadeem Nawara and Mohammad Mahmoud Odeh Abu Daher in a Nakba Day Protest Outside of Beitunia on May 1, 2014,” http://beitunia.forensic-architecture.org.

95 J. D. Aronson et al., Video Analytics for Conflict Monitoring and Human Rights Documentation: Technical Report (Pittsburgh: Carnegie Mellon University Center for Human Rights Science, 2015), www.cmu.edu/chrs/documents/ELAMP-Technical-Report.pdf.

96 Rudiment and the Centre for Visual Computing, ARCADE: ARtillery Crater Analysis and Detection Engine, https://rudiment.info/project/arcade/.

97 Cleveland and McGill, “Graphical Perception.”

99 D. Carroll, S. Chakraborty, and J. Lazar, “Designing Accessible Visualizations: The Case of Designing a Weather Map for Blind Users,” in C. Stephanidis and M. Antona (eds.), Universal Access in Human-Computer Interaction, Design Methods, Tools, and Interaction Techniques for eInclusion, vol. 8009 of Lecture Notes in Computer Science (Berlin: Springer, 2013), pp. 436–45.

100 T. Bedi, A. Coudouel, and K. Simler (eds.), More than a Pretty Picture: Using Poverty Maps to Design Better Policies and Interventions (Washington, DC: World Bank, 2007).

101 Zuzana Licko, interview by R. VanderLans, Emigre, no. 15 (1990), p. 43.

102 L. A. Allen, “Martyr Bodies in the Media: Human Rights, Aesthetics, and the Politics of Immediation in the Palestinian Intifada” (2009) 36 American Ethnologist 161–80.

103 M. Kende, Global Internet Report 2015: Mobile Evolution and Development of the Internet (Reston, VA: Internet Society, 2015).

104 J. Emerson, “Guns, butter and ballots: Citizens take charge by designing for better government,” Communication Arts, January–February 2005, pp. 14–23.

105 N. Thomas and M. Martina, “China tightens rules on maps amid territorial disputes,” Reuters, December 16, 2015, www.reuters.com/article/us-china-maps-idUSKBN0TZ1AR20151216.

106 Agencies, New Delhi, “7-year jail, Rs 100 crore fine for wrong depiction of India map,” Times of India, May 5, 2016, timesofindia.indiatimes.com/india/7-year-jail-Rs-100-crore-fine-for-wrong-depiction-of-India-map/articleshow/52133221.cms.

107 C. D’Ignazio, The Detroit Geographic Expedition and Institute: A Case Study in Civic Mapping (Cambridge, MA: MIT Center for Civic Media, 2013).

108 D. Weiner and T. M. Harris, “Community-Integrated GIS for Land Reform in South Africa” (2003) 15 URISA Journal 6173.

109 See Humanitarian OpenStreetMap Team, https://hotosm.org.

110 J. Halliday, “OpenStreetMap without Servers [Part 2]: A peer-to-peer OSM database,” Digital Democracy, June 9, 2016, www.digital-democracy.org/blog/osm-p2p/.

111 S. Bagchi, “Feminist mapping initiative tries to reclaim Delhi, one dot at a time,” FactorDaily, June 21, 2016, https://factordaily.com/mapping-delhi-hidden-pockets/.

112 Watermarks project, “Visualizing Sea Level Rise,” February 2009, www.watermarksproject.org.

113 F. Gonzalez-Torres, Untitled (Portrait of Ross in L.A.), 1991, candies individually wrapped in multicolor cellophane, endless supply, Art Institute of Chicago.

114 Flash is a multimedia software package that runs primarily in web browsers, enabling them to stream audio and video, play animation and interactive games, and run rich applications. It has been criticized for poor accessibility and a series of high-profile security vulnerabilities. The Apple iPhone, released in 2007, cannot run Flash, nor can the Apple iPad, released in 2010. In November 2011, Adobe announced that it would no longer support Flash for mobile browsers. Flash’s popularity continued to decline, and in July 2017 Adobe announced that it would end development and distribution of Flash in 2020. See C. Warren, “The Life, Death and Rebirth of Adobe Flash,” Mashable, November 19, 2012, http://mashable.com/2012/11/19/history-of-flash/, and Adobe Corporate Communications, “Flash & the future of interactive content,” July 25, 2017, https://theblog.adobe.com/adobe-flash-update/.

115 Amnesty International, “Pakistan: Millions Suffer in Suffer in [sic] Human Rights Free Zone in Northwest Pakistan,” June 10, 2010, www.amnestyusa.org/news/press-releases/pakistan-millions-suffer-in-suffer-in-human-rights-free-zone-in-northwest-pakistan.

116 See this list of GitHub accounts of various news organizations: https://github.com/silva-shih/open-journalism.

117 J. Emerson, “Ten Challenges to the Use of Data Visualization in Human Rights,” Social Design Notes, February 9, 2016, http://backspace.com/notes/2016/02/ten-challenges.php.

9 Risk and the Pluralism of Digital Human Rights Fact-Finding and Advocacy

1 This work was supported by the Economic and Social Research Council (grant no. ES/K009850/1) and the Isaac Newton Trust.

2 I lead a project at the University of Cambridge called “The Whistle,” which is a digital app we are developing to facilitate human rights reporting and verification. See www.thewhistle.org.

3 I draw on my ongoing digital ethnography of human rights practices in the digital age for examples of empirical realities.

4 R. Mansell, “The Life and Times of the Information Society” (2010) 28(2) Prometheus: Critical Studies in Innovation 165–86 at 173.

5 K. Nash, The Political Sociology of Human Rights (Cambridge: Cambridge University Press, 2015).

6 Human rights fact-finding is also used to produce evidence for courts, where the uptake of ICTs is likewise an important area for inquiry, though one beyond the scope of this chapter.

7 M. Madianou, L. Longboan, and J. C. Ong, “Finding a Voice through Humanitarian Technologies? Communication Technologies and Participation in Disaster Recovery” (2015) 9 International Journal of Communication 3020–38 at 3022; A. T. Thrall, D. Stecula, and D. Sweet, “May We Have Your Attention Please? Human-Rights NGOs and the Problem of Global Communication” (2014) 19(2) The International Journal of Press/Politics 135–59 at 137–38.

8 W. Bottero and N. Crossley, “Worlds, Fields and Networks: Becker, Bourdieu and the Structures of Social Relations” (2011) 5(1) Cultural Sociology 99–119 at 105.

9 E. McPherson, “Source Credibility as ‘Information Subsidy’: Strategies for Successful NGO Journalism at Mexican Human Rights NGOs” (2016) 15(3) Journal of Human Rights 330–46 at 331–32.

10 D. Lupton, “Digital Risk Society,” in A. Burgess, A. Alemanno, and J. O. Zinn (eds.), The Routledge Handbook of Risk Studies (Abingdon, UK: Routledge, 2016), p. 302.

11 U. Beck, Risk Society: Towards a New Modernity (London: SAGE Publications, 1992), p. 23.

12 J. B. Thompson, The Media and Modernity: A Social Theory of the Media (Stanford, CA: Stanford University Press, 1995), pp. 83–85.

13 Ibid., p. 22.

14 R. Mackey, “Confused by How YouTube Assigns Dates, Russians Cite False Claim on Syria Videos,” The New York Times, August 23, 2013, http://thelede.blogs.nytimes.com/2013/08/23/confused-by-how-youtube-assigns-dates-russians-cite-false-claim-on-syria-videos/.

15 Z. Tufekci, “Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency” (2015) 13 Journal on Telecommunications and High Technology Law 203–18 at 208–09.

16 E. McPherson, “Social Media and Human Rights Advocacy,” in H. Tumber and S. Waisbord (eds.), The Routledge Companion to Media and Human Rights (London: Routledge, 2017), pp. 281–83.

17 Thompson, The Media and Modernity, p. 87.

18 P. Alston, “Introduction: Third Generation Human Rights Fact-Finding,” in Proceedings of the Annual Meeting (Washington, DC: American Society of International Law, 2013), pp. 61–62. For a recent overview of ways that ICTs are being adopted in human rights practice, see E. McPherson, ICTs and Human Rights Practice (Cambridge: University of Cambridge Centre of Governance and Human Rights, 2015).

19 P. Alston and C. Gillespie, “Global Human Rights Monitoring, New Technologies, and the Politics of Information” (2012) 23(4) European Journal of International Law 1089–123 at 1108–09.

20 M. Powers, “NGO Publicity and Reinforcing Path Dependencies: Explaining the Persistence of Media-Centered Publicity Strategies” (2016) 21(4) The International Journal of Press/Politics 492–94.

21 McPherson, “Source Credibility as ‘Information Subsidy’,” at 333–35.

22 S. Hopgood, Keepers of the Flame: Understanding Amnesty International (Ithaca, NY: Cornell University Press, 2006), pp. 90–92; A. M. Nah et al., “A Research Agenda for the Protection of Human Rights Defenders” (2013) 5(3) Journal of Human Rights Practice 401–20 at 413.

23 S. Hankey and D. Ó Clunaigh, “Rethinking Risk and Security of Human Rights Defenders in the Digital Age” (2013) 5(3) Journal of Human Rights Practice 535–47 at 539.

24 Amnesty International, Benetech, and The Engine Room, DatNav: New Guide to Navigate and Integrate Digital Data in Human Rights Research (London: The Engine Room, 2016). See also M. Latonero, “Big Data Analytics and Human Rights: Privacy Considerations in Context,” Chapter 7 in this volume. Open source social media content includes perpetrator propaganda videos. It also includes content originally posted without a witnessing purpose but later repurposed by others, such as the use of a geolocated selfie for corroboration of a military vehicle’s movements because it happens to capture that vehicle driving past in the background.

25 E. McPherson, “Digital Human Rights Reporting by Civilian Witnesses: Surmounting the Verification Barrier,” in R. A. Lind (ed.), Produsing Theory in a Digital World 2.0: The Intersection of Audiences and Production in Contemporary Theory (New York: Peter Lang Publishing, 2015), vol. 2, p. 206; S. Tait, “Bearing Witness, Journalism and Moral Responsibility” (2011) 33(8) Media, Culture & Society 1220–35 at 1221–22.

26 McPherson, ICTs and Human Rights Practice, pp. 14–17.

27 C. Koettl, Citizen Media Research and Verification: An Analytical Framework for Human Rights Practitioners (Cambridge: University of Cambridge Centre of Governance and Human Rights, 2016), p. 7.

28 M. Land et al., #ICT4HR: Information and Communication Technologies for Human Rights (Washington, DC: The World Bank Group, 2012), p. 17; M. Land, “Peer Producing Human Rights” (2009) 46(4) Alberta Law Review 1115–39 at 1120–22.

29 J. Aronson, “The Utility of User-Generated Content in Human Rights Investigations,” Chapter 6 in this volume.

30 Alston and Gillespie, “Global Human Rights Monitoring, New Technologies, and the Politics of Information,” at 1112–13.

31 “In the Firing Line: Shooting at Australia’s Refugee Centre on Manus Island in Papua New Guinea,” Amnesty International, May 14, 2017, www.amnesty.org/en/documents/document/?indexNumber=asa34%2f6171%2f2017&language=en.

32 D. Lyon, Surveillance after Snowden (Cambridge: Polity Press, 2015), p. 3.

33 Hankey and Ó Clunaigh, “Rethinking Risk and Security of Human Rights Defenders in the Digital Age,” at 538.

34 “Metadata,” Privacy International, www.privacyinternational.org/node/53.

35 “Egypt’s plan for mass surveillance of social media an attack on internet privacy and freedom of expression,” Amnesty International, June 4, 2014, www.amnesty.org/en/latest/news/2014/06/egypt-s-attack-internet-privacy-tightens-noose-freedom-expression/; S. Kelly et al., “Tightening the Net: Governments Expand Online Controls,” Freedom House, 2014, https://freedomhouse.org/report/freedom-net/2014/tightening-net-governments.

36 Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, David Kaye, U.N. Doc. A/HRC/29/32 (May 22, 2015).

37 Amnesty International, Benetech, and The Engine Room, DatNav, p. 23; S. Bender-de Moll, Potential Human Rights Uses of Network Analysis and Mapping (Washington, DC: AAAS Science and Human Rights Program, 2008), p. 4.

38 M. Schwartz, “Cyberwar for Sale,” The New York Times, January 4, 2017, www.nytimes.com/2017/01/04/magazine/cyberwar-for-sale.html.

39 B. Marczak and J. Scott-Railton, “The Million Dollar Dissident: NSO Group’s iPhone Zero-Days Used against a UAE Human Rights Defender,” The Citizen Lab, August 24, 2016, https://citizenlab.org/2016/08/million-dollar-dissident-iphone-zero-day-nso-group-uae/.

40 A. Ahmed and N. Perlroth, “Using Texts as Lures, Government Spyware Targets Mexican Journalists and Their Families,” The New York Times, June 19, 2017, www.nytimes.com/2017/06/19/world/americas/mexico-spyware-anticrime.html?_r=0.

41 S. Kelly et al., “Silencing the Messenger: Communication Apps Under Pressure,” Freedom House, 2016, https://freedomhouse.org/report/freedom-net/freedom-net-2016.

42 Amnesty International, Benetech, and The Engine Room, DatNav, p. 61.

43 A. Crowe, S. Lee, and M. Verstraete, “Securing Safe Spaces Online: Encryption, Anonymity, and Human Rights,” Privacy International, 2015, www.privacyinternational.org/sites/default/files/Securing%20Safe%20Spaces%20Online_0.pdf.

44 H. Abelson et al., “Keys under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications” (2015) 1(1) Journal of Cybersecurity 69–79.

45 D. F. Orentlicher, “Bearing Witness: The Art and Science of Human Rights Fact-Finding” (1990) 3 Harvard Human Rights Journal 83–136 at 114.

46 P. Brown, “It’s Genuine, as Opposed to Manufactured”: A Study of UK News Audiences’ Attitudes towards Eyewitness Media (Oxford: Reuters Institute for the Study of Journalism, 2015), http://reutersinstitute.politics.ox.ac.uk/publication/its-genuine-opposed-manufactured.

47 Amnesty International, Benetech, and The Engine Room, DatNav, p. 35.

48 McPherson, “Digital Human Rights Reporting by Civilian Witnesses,” pp. 193–94.

49 Koettl, Citizen Media Research and Verification, pp. 27–28.

50 Ibid., p. 16.

51 M. Bair and V. Maglio, “Video Exposes Police Abuse in Venezuela (Or Is It Mexico? Or Colombia?),” WITNESS Blog, February 25, 2014, http://blog.witness.org/2014/02/video-exposes-police-abuse-venezuela-mexico-colombia/.

52 L. D. Brown, Creating Credibility (Sterling, VA: Kumarian Press, 2008), pp. 3–8; S. Cottle and D. Nolan, “Global Humanitarianism and the Changing Aid-Media Field: Everyone Was Dying for Footage” (2007) 8(6) Journalism Studies 862–88 at 872; M. Gibelman and S. R. Gelman, “A Loss of Credibility: Patterns of Wrongdoing Among Nongovernmental Organizations” (2004) 15(4) Voluntas: International Journal of Voluntary and Nonprofit Organizations 355–81 at 372.

53 Koettl, Citizen Media Research and Verification, p. 6.

54 McPherson, “Digital Human Rights Reporting by Civilian Witnesses,” pp. 199–200.

55 “Incorporating Social Media into Your Human Rights Campaigning,” New Tactics in Human Rights, 2013, www.newtactics.org/conversation/incorporating-social-media-your-human-rights-campaigning.

56 Powers, “NGO Publicity and Reinforcing Path Dependencies,” p. 500.

57 McPherson, ICTs and Human Rights Practice, pp. 28–32; R. Stewart, “Amnesty International’s head of comms on why interactive social campaigns could help find a solution to the refugee crisis,” The Drum, February 7, 2017, www.thedrum.com/news/2017/02/07/amnesty-international-s-head-comms-why-interactive-social-campaigns-could-help-find.

58 Alston and Gillespie, “Global Human Rights Monitoring, New Technologies, and the Politics of Information,” at 1112–13; E. McPherson, “How Editors Choose Which Human Rights News to Cover: A Case Study of Mexican Newspapers,” in T. A. Borer (ed.), Media, Mobilization, and Human Rights: Mediating Suffering (London: Zed Books, 2012), pp. 96–121.

59 J. Emerson et al., “The Challenging Power of Data Visualization for Human Rights Advocacy,” Chapter 8 in this volume.

60 D. Karpf, The MoveOn Effect: The Unexpected Transformation of American Political Advocacy (New York: Oxford University Press, 2012), pp. 36–37.

61 E. McPherson, “Advocacy Organizations’ Evaluation of Social Media Information for NGO Journalism: The Evidence and Engagement Models” (2015) 59(1) American Behavioral Scientist 124–48 at 134–39.

62 Thunderclap is free, but the platform decides whether to approve campaigns, and the extent of campaign visibility can depend on users’ purchase of premium plans. “Take your message even further,” Thunderclap, 2017, www.thunderclap.it/pricing.

63 EDUCA, “Thunderclap: TÚ PUEDES EVALUAR A EPN EN DH” [Thunderclap: You can evaluate EPN on human rights], October 23, 2013, www.thunderclap.it/projects/5687-t-puedes-evaluar-a-epn-en-dh.

64 S. Waisbord, “Can NGOs Change the News?” (2011) 5 International Journal of Communication 142–65 at 149–51.

65 J. van Dijck and T. Poell, “Understanding Social Media Logic” (2013) 1(1) Media and Communication 2–14.

66 McPherson, “Social Media and Human Rights Advocacy.”

67 S. Gregory, “Human Rights Made Visible: New Dimensions to Anonymity, Consent, and Intentionality,” in M. McLagan and Y. McKee (eds.) Sensible Politics: The Visual Culture of Nongovernmental Activism (New York, Cambridge, MA: Zone Books, 2012), p. 552.

68 A. E. Marwick and D. Boyd, “I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience” (2011) 13(1) New Media & Society 114–33; Thompson, The Media and Modernity, pp. 143–44.

69 Thompson, The Media and Modernity, p. 141.

70 B. James, “Think Tank Apologizes for Intern’s ‘Suck It’ Tweet to Amnesty International,” Talking Points Memo, August 19, 2014, http://talkingpointsmemo.com/livewire/csis-amnesty-international-suck-it-tweet; M. Roth, “Think Tank Blames Intern for Tweet Telling Amnesty International to ‘Suck It’,” MTV News, August 20, 2014, www.mtv.com/news/1904747/csis-intern-amnesty-international/.

71 Cottle and Nolan, “Global Humanitarianism and the Changing Aid-Media Field” at 871–74.

72 N. Koumchatzky and A. Andryeyev, “Using Deep Learning at Scale in Twitter’s Timelines,” Twitter, May 9, 2017, https://blog.twitter.com/engineering/en_us/topics/insights/2017/using-deep-learning-at-scale-in-twitters-timelines.html; Tufekci, “Algorithmic Harms Beyond Facebook and Google.”

74 W. Knight, “The Dark Secret at the Heart of AI,” MIT Technology Review, April 11, 2017, www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/; McPherson, “Source Credibility as ‘Information Subsidy’.”

75 M. Collins, “It’s time for charities to stop wasting money on social media,” The Guardian, March 11, 2016, www.theguardian.com/voluntary-sector-network/2016/mar/11/charities-wasting-money-social-media.

76 Facebook, “Measurement & Tracking,” Nonprofits on Facebook, 2017, https://nonprofits.fb.com/topic/measurement-tracking/.

77 Koumchatzky and Andryeyev, “Using Deep Learning at Scale in Twitter’s Timelines.”

78 L. Karch, “Is Social Media a Time-Waster for Nonprofits?” Nonprofit Quarterly, March 17, 2016, https://nonprofitquarterly.org/2016/03/17/is-social-media-a-time-waster-for-nonprofits/.

79 M. Bair, “Navigating the Ethics of Citizen Video: The Case of a Sexual Assault in Egypt” (2014) 19 Arab Media & Society 1–7; Gregory, “Human Rights Made Visible,” p. 555.

80 S. T. Roberts, “Commercial Content Moderation: Digital Laborers’ Dirty Work,” in S. Umoja Noble and B. M. Tynes (eds.), The Intersectional Internet: Race, Sex, Class, and Culture Online (New York: Peter Lang Publishing, 2016), pp. 148–49.

81 “Community Standards,” Facebook, 2017, www.facebook.com/communitystandards#violence-and-graphic-content; A. Peterson, “Why the Philando Castile police-shooting video disappeared from Facebook – then came back,” The Washington Post, July 7, 2016, www.washingtonpost.com/news/the-switch/wp/2016/07/07/why-facebook-took-down-the-philando-castile-shooting-video-then-put-it-back-up/.

82 McPherson, “Social Media and Human Rights Advocacy,” pp. 281–83.

83 “Grab People’s Attention,” Nonprofits on Facebook, 2016, https://nonprofits.fb.com/topic/grab-peoples-attention.

84 van Dijck and Poell, “Understanding Social Media Logic,” p. 7.

85 See, e.g., C. Homans, “The Boy on the Beach,” The New York Times, September 3, 2015, www.nytimes.com/2015/09/03/magazine/the-boy-on-the-beach.html.

86 D. Thompson, “Upworthy: I Thought This Website Was Crazy, but What Happened Next Changed Everything,” The Atlantic, November 14, 2013, www.theatlantic.com/business/archive/2013/11/upworthy-i-thought-this-website-was-crazy-but-what-happened-next-changed-everything/281472/.

87 Powers, “NGO Publicity and Reinforcing Path Dependencies” at 498.

88 Beck, Risk Society, p. 23.

89 Kelly et al., “Tightening the Net.”

90 Hankey and Ó Clunaigh, “Rethinking Risk and Security of Human Rights Defenders in the Digital Age” at 542.

91 M. Land, “Peer Producing Human Rights,” at 1136; Gibelman and Gelman, “A Loss of Credibility” at 376.

92 McPherson, “Source Credibility as ‘Information Subsidy’” at 337.

93 Thrall, Stecula, and Sweet, “May We Have Your Attention Please?” at 143.

94 M. E. Keck and K. Sikkink, Activists beyond Borders: Advocacy Networks in International Politics (Ithaca, NY: Cornell University Press, 1998).

95 Nah et al., “A Research Agenda for the Protection of Human Rights Defenders” at 405–06.

96 Beck, Risk Society, pp. 22, 54.

97 D. Lupton, “Introduction: Risk and Sociocultural Theory,” in D. Lupton (ed.), Risk and Sociocultural Theory: New Directions and Perspectives (Cambridge: Cambridge University Press, 1999), pp. 4–5.

98 Hankey and Ó Clunaigh, “Rethinking Risk and Security of Human Rights Defenders in the Digital Age” at 542.

99 Amnesty International, Benetech, and The Engine Room, DatNav, p. 8.

101 C. Koettl, “About & FAQ,” Citizen Evidence Lab, 2014, www.citizenevidence.org/about/.

102 “Using Social Networking for Innovative Advocacy,” New Tactics in Human Rights, 2016, www.newtactics.org/conversation/using-social-networking-innovative-advocacy.

103 McPherson, “Digital Human Rights Reporting by Civilian Witnesses,” pp. 197–98.

104 “Security-in-a-Box: Digital Security Tools and Tactics,” https://securityinabox.org/en.

105 Hankey and Ó Clunaigh, “Rethinking Risk and Security of Human Rights Defenders in the Digital Age” at 540.

106 “Tactical Tech’s and Front Line Defenders’ statement on Zone 9 Bloggers,” Tactical Tech, August 15, 2014, https://tacticaltech.org/news/tactical-techs-and-front-line-defenders-statement-zone-9-bloggers.

107 J. Kenway and J. McLeod, “Bourdieu’s Reflexive Sociology and ‘Spaces of Points of View’: Whose Reflexivity, Which Perspective?” (2004) 25(4) British Journal of Sociology of Education 525–44 at 527.

108 D. Lupton, Risk, 2nd ed. (London: Routledge, 2013), p. 21.

109 U. Beck, “The digital freedom risk: Too fragile an acknowledgment,” openDemocracy, January 5, 2015, www.opendemocracy.net/can-europe-make-it/ulrich-beck/digital-freedom-risk-too-fragile-acknowledgment.

110 Beck, Risk Society, p. 33.

111 Z. Rahman, “Technology tools in human rights,” The Engine Room, 2016, www.theengineroom.org/wp-content/uploads/2017/01/technology-tools-in-human-rights_high-quality.pdf.

112 M. Loveman, “High-Risk Collective Action: Defending Human Rights in Chile, Uruguay, and Argentina” (1998) 104(2) American Journal of Sociology 477–525; S. Madhok and S. M. Rai, “Agency, Injury, and Transgressive Politics in Neoliberal Times” (2012) 37(3) Signs: Journal of Women in Culture and Society 645–69 at 661.

113 Lupton, “Introduction: Risk and Sociocultural Theory,” pp. 4–5.

114 K. E. Pearce and S. Kendzior, “Networked Authoritarianism and Social Media in Azerbaijan” (2012) 62(2) Journal of Communication 283–98.

115 “UK surveillance Tribunal reveals the government spied on Amnesty International,” Amnesty International, July 1, 2015, www.amnesty.org/en/latest/news/2015/07/uk-surveillance-tribunal-reveals-the-government-spied-on-amnesty-international/.

116 Beck, Risk Society, p. 46.

117 Ibid., pp. 23, 46.

118 Ibid., pp. 53–55.

119 Thompson, The Media and Modernity, pp. 83–85.

120 Beck, Risk Society, p. 22.

121 McPherson, “Source Credibility as ‘Information Subsidy’” at 333–35.

122 Tufekci, “Algorithmic Harms Beyond Facebook and Google” at 208–09.

123 Bair, “Navigating the Ethics of Citizen Video” at 3; S. Dubberley, E. Griffin, and H. M. Bal, “Making Secondary Trauma a Primary Issue: A Study of Eyewitness Media and Vicarious Trauma on the Digital Frontline” Eyewitness Media Hub, 2015, http://eyewitnessmediahub.com/research/vicarious-trauma; Gregory, “Human Rights Made Visible.”

124 Beck, “The Digital Freedom Risk”; A. Giddens, The Consequences of Modernity (Cambridge: Polity Press, 1990), p. 139.
