
Part II - Modalities: How Might Cyber Peace Be Achieved? What Practices and Processes Might Need to Be Followed in Order to Make It a Reality?

Published online by Cambridge University Press:  21 April 2022

Edited by Scott J. Shackelford, Indiana University, Bloomington; Frederick Douzet, Université Paris 8; and Christopher Ankersen, New York University

In: Cyber Peace: Charting a Path Toward a Sustainable, Stable, and Secure Cyberspace, pp. 37–128
Publisher: Cambridge University Press
Print publication year: 2022
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (CC BY-NC-ND 4.0) https://creativecommons.org/cclicenses/

3 Information Sharing as a Critical Best Practice for the Sustainability of Cyber Peace

Deborah Housen-Couriel
1 Introduction: Framing the Relationship between Information Sharing and Cyber Peace

The concept of cyber peace brings a much-needed, innovative perspective to discussions of the governance of cyberspace. Ambiguity, conflicting terminology, and a lack of transparency with respect to the activities of state and nonstate actors have characterized efforts to conceptualize and analyze this area of human endeavor at least since John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace. Barlow’s (1996) proclamation claimed cyberspace as a home for the “civilization of the Mind” and a “global social space” that must be kept free of governments, state sovereignty, and legal constructs – in effect, exempt from any type of governance – and thereby marked, early in the life of online activities, the challenges and tensions that remain today in the global collective action problem of cyberspace governance. Thus, the distinctive perspective of cyber peace has the potential to set our analytical sights anew and to provide a framework for moving ahead with normative projects connected to cyberspace governance, including the ongoing elucidation of binding rules of international and domestic law applicable to the cyberspace activities of state and nonstate actors.

Building on previous chapters that treat the concept of cyber peace in depth, the following definition focuses on four specific elements:

Cyber peace is […] not […] the absence of conflict […]. Rather it is the construction of a network of multilevel regimes that promote global, just and sustainable cybersecurity by clarifying the rules of the road for companies and countries alike to help reduce the threats of cyber conflict, crime and espionage to levels comparable to other business and national security risks. To achieve this goal, a new approach to cybersecurity is needed that seeks out best practices from the public and private sectors to build robust, secure systems and couches cybersecurity within the larger debate on internet governance.

(Shackelford, 2014, pp. xxv–xxvi)

The four elements emphasized in the above definition describe the fundamental connection between the goals of cyber peace and information sharing (IS), the subject of this chapter (Johnson et al., 2016, p. iii).Footnote 1 Clarification of “rules of the road” (whether binding or voluntary), threat reduction, and risk assessment, together with best practices for carrying out these three tasks, are precisely the substantive contribution that IS makes to the cybersecurity postures and strategies of stakeholders participating in any given IS platform. As detailed herein, such a platform optimally defines threshold norms of permissible and nonpermissible online behavior on the part of all actors, establishing the criteria for determining whether an individual, private organization, country, group of hackers, or even another autonomously acting computer has violated a rule (Deljoo et al., 2018, p. 1508). It also reduces vulnerability to cyber threats by lessening the informational asymmetries that characterize hostile cyber activities to the advantage of the attacker, and it contributes to organizational risk assessment by integrating the information shared by other participants in the IS community into heightened “cyber situational awareness” for all sharers.
Fourth, IS is readily framed and understood by a multiplicity of actors at the domestic level – private, governmental, and individual – as a best practice and, at the international level, as a confidence-building measure (CBM) for building trust among state and nonstate actors.Footnote 2 These two characterizations of IS, in the domestic and international jurisdictional arenas respectively, are evidenced by the inclusion of IS modalities in many instances of national law and policy, as well as in dozens of multilateral and bilateral instruments for governing cyberspace at the international level (Housen-Couriel, 2017, pp. 46–84). Five examples of the latter are the 2015 Shanghai Cooperation Organization’s International Code of Conduct for Information Security, the UN GGE Report of July 2015, the OSCE’s Confidence-Building Measures for Cyberspace of 2016, the EU’s Network and Information Security Directive that entered into force in August 2016, and the 2018 Paris Call for Trust and Security in Cyberspace.

When IS is implemented as a voluntary or recommended best practice or CBM in the context of these regulatory arrangements – rather than as a mandated regulatory requirement – it has the advantage of bypassing the present challenges of achieving formal and substantive multistakeholder agreement on cyber norms, challenges that are inherent in national and multilateral legal regimes for the governance of cyberspace and are often observed as characteristic of the contemporary cyber lay of the land (Macak, 2016; Ruhl et al., 2020).

We propose in this chapter that, as IS platforms provide increasingly relevant, timely, and actionable data on vulnerabilities, including zero-day vulnerabilities (Ablon & Bogart, 2017); adversaries’ tactics, techniques, and procedures; malware tool configurations; and other tactical and strategic threat indicators, stakeholders will become more incentivized to trust IS platforms and to utilize them both for real-time response to hostile cyber activities and for building long-term cybersecurity strategies. Technological advances are easing this process, as platforms adopt new techniques for the automation of alerts and communications among sharers (Wagner et al., 2019). Thus, in instances when sharing communities are substantively and technologically optimized for cybersecurity, participants benefit from expertise and insights that may otherwise be unavailable to them with respect to developing threat vectors, mitigation of specific cyber risks, and real-time coordinated responses to hostile cyber events.

Nevertheless, notwithstanding this chapter’s assertion that IS constitutes a best practice and a CBM, IS for the mitigation of cyber risk has also been critiqued for drawbacks and disincentives that have led to the current, less than optimal utilization of IS platforms. Some of these challenges – posed to stakeholders that refrain from joining IS platforms, and to IS participants who underuse platforms or use them as free riders – are reviewed in Section 3. Two of the underlying assumptions of the chapter address this challenge of effectively incentivizing stakeholders’ use of IS platforms.

The first assumption is that the continued honing of the technological aspects of IS will make platforms more relevant for sharers: Sharers will increasingly be able to rely upon robust, user-friendly, flexible, and confidential platforms that meet their needs for boosting cybersecurity, especially for coping with real-time cyber events that are in the process of compromising their systems and data. The ongoing relevance and effectiveness of a given IS platform will thus depend upon its incorporation of technology-based best practices for IS, including, inter alia, automated threat identification and sharing, vetting of information reliability, and interoperability with other IS platforms.

The second assumption relates to the value of polycentric governance in cyberspace (Craig & Shackelford, 2015). Although no panacea,Footnote 3 the sharing of cyber threat information is optimized for platform participants when it engages a plurality and diversity of actors: governments, private corporations, NGOs, academia, informal groups, epistemic communities, individuals, and even autonomous or semiautonomous computer systems.Footnote 4 Optimal IS will also include a plurality and diversity of methodologies and measures: real-time information on hostile cyber events, including digital forensics shared by analysts; data on the cyber strategies and policies of private sector organizations, of economic sectors, and of countries; and technical specifications such as those referred to above, evaluations of developing threat vectors, and cyber awareness and training materials. Some of these types of information constitute protected data, the sharing of which impacts substantive legal rights, such as individuals’ rights to personal data privacy, corporate intellectual property, and antitrust guarantees (Chabrow, 2015; Elkin-Koren, 1998; Harkins, 2016, pp. 49–63; Shu-yun & Neng-hua, 2007). Analysis of the regulatory protections provided for safeguarding these rights in the context of IS exceeds the scope of the present chapter and will be treated elsewhere. Support for the position that a polycentric governance model is also advantageous for oversight of such rights protections (Shackelford, 2016) will be expanded upon below.

Thus, to summarize the points raised in this introductory section, we propose in this chapter to show that, to the extent that IS through trusted platforms incorporates modes of polycentric governance, leveraging a multilevel and multisectoral diversity of actors, methodologies, and measures, cybersecurity is supported and the aims of cyber peace are advanced.

In conclusion, an often observed but challenging aspect of cybersecurity and cyber peace in general should also be highlighted in the present IS context: IS is an ongoing exercise in trust building among sharers (Ostrom, 1990; Ostrom et al., 2012). Platform participants must be able to rely upon the security of all communications channels; they must have confidence that the data shared will be utilized only in accordance with agreed rules by all participants; and they must have certainty that any stored or retained data are completely protected and remain confidential. By leveraging technological developments and modes of polycentric governance, IS has the potential to embody Alexander Klimburg’s (2018, p. 359) observation that “trust is a tangible resource in cyberspace,” hard coded into its basic protocols, into the development of the Internet and, we venture to add, into secure platforms for the sharing of critical information.

The chapter is structured as follows. Section 2 describes the “how” of IS measures by reviewing selected operational aspects of two examples of IS platforms: one a domestic platform and the second a multilateral one for the global financial sector. Section 3 discusses the ways in which IS mitigates cyber vulnerabilities, and includes some critique of the present utilization of IS. Section 4 characterizes the relationship between cyber peace and IS, arguing that IS constitutes a critical building block of sustainable cyber peace governance, given present challenges to binding normative regimes internationally and domestically. Section 5 summarizes the main points and proposes areas for further research with ramifications for IS in support of cyber peace, including the exploration of IS models with respect to other global collective action problems, such as global health, environmental quality, and the elimination of debris in outer space.

2 How Information Sharing Works: Selected Operational Aspects of IS Platforms for “Best Practice” Mitigation of Cyber Risk

This section describes the practical implementation of IS measures by first defining the concept of IS in the cybersecurity context and then noting the key characteristics of IS platforms, before examining two examples of governmental and private sector exchange of cyber information: one domestic in scope (the US’ Cyber Information Sharing and Collaboration Program [CISCP]) and the other international and sectoral (the Global Financial Services Information Sharing and Analysis Center [FS-ISAC]). The concluding section addresses the operationalization of IS as a standardized best practice for bolstering cybersecurity.

2.1 Defining Information Sharing

Information sharing is a measure for the interorganizational, intersectoral, and intergovernmental exchange of data that is deemed by sharers to be relevant to the resolution of a collective action problem (Skopik, Settanni, & Fiedler, 2016). In the cyber peace context, it is the agreed-upon exchange of an array of cybersecurity-related information, such as vulnerabilities, risks, threats, and internal security issues (“tactical IS”), as well as best practices, standards, intelligence, incident response planning, and business continuity arrangements (“strategic IS”) (International Standards Organization, 2015). The primary aim of IS in all of these contexts is to reduce information asymmetries regarding cyber vulnerabilities at two levels: between hostile cyber actors and their targets, and between targeted organizations themselves, none of which has complete situational awareness of the threat environment on its own.Footnote 5

The 2016 Guide to Cyber Threat Information Sharing, published by the US National Institute of Standards and Technology (NIST), describes the advantages of IS measures for improving cybersecurityFootnote 6 as follows:

By exchanging cyber threat information within a sharing community, organizations can leverage the collective knowledge, experience, and capabilities of that sharing community to gain a more complete understanding of the threats the organization may face. Using this knowledge, an organization can make threat-informed decisions regarding defensive capabilities, threat detection techniques, and mitigation strategies. By correlating and analyzing cyber threat information from multiple sources, an organization can also enrich existing information and make it more actionable.

These advantages are gained through the resolution of several key issues which arise in defining the four modalities of IS for any given IS platform:

  • The agreed rules for thresholds of shared threats and events – IS depends upon prior agreement among participants as to the threshold events that will trigger the need to share information, especially for the real-time sharing of vulnerabilities and hostile cyber events requiring specific defensive actions such as patching vulnerabilities (ideally within an agreed-upon window of time). This threshold determination is both substantive and technical: It is set in accordance with the legal and regulatory requirements of the given jurisdiction, whether domestic or international, and it is triggered by technical, indicator-based incident response protocols protecting the network.

  • Regulatory issues – Substantive normative and regulatory frameworks constitute an ever-present backdrop for the technological modalities of IS and the determination of IS thresholds. The role of such frameworks in IS, especially the relationship between them and the agreed technical rules for information sharing, is critical. They include the aforementioned rights protections (personal data privacy protections, corporate intellectual property (IP) safeguards, and antitrust guarantees), general international law constraints on hostile cyber activities (Schmitt, 2017), and bilateral and multilateral treaty provisions (Convention on Cybercrime, 2001). Treatment of these substantive issues is beyond the scope of the present chapter; they are noted in the Conclusion for further research.

  • The types of information shared – Each IS platform specifies the typologies of relevant information to be shared by participants, often in a Terms of Use document that is restricted to the participants – an internal code of conduct that may serve to build trust among sharing entities. Legal and regulatory constraints also determine the types of information that may be shared and the conditions for sharing, such as the anonymization of protected personal data. One example is the Cybersecurity Information Sharing Act of 2015, S. 754, 114th Cong., which defines in Section 104(c)(1) two types of shareable information that must be restricted to a “cybersecurity purpose”: “cyber threat indicators” and “defensive measures.” As discussed below, current developments are moving toward the standardization of relevant threat indicators and toward automated, rapid IS – in effect, a commoditization of cyber threat data within communities of trust.

  • The sharing entities – Since effective IS platforms are based on communities of trusted sharers, the identity of the sharing entities should be explicit and transparent to all participants (Gill & Thompson, 2016; Lin, Hung, & Chen, 2009; Özalp, Zheng, & Ren, 2014). Moving from the local to the global, the sharing of cybersecurity-relevant data may take place among individuals (e.g., the MISP and Analyst1 platforms for cyber analysts); within a corporate sector (e.g., the Financial Services Information Sharing and Analysis Center [FS-ISAC] and Israel’s Cyber and Finance Continuity Center [FC3]); between private sector entities and governmental agencies (as in the UK’s Cyber Security Information Sharing Partnership [CiSP] and the US’ CISCP example below); among one country’s governmental agencies (e.g., the US federal government’s Cyber Threat Intelligence Integration Center); between states, either bilaterally or multilaterally (e.g., the European Union’s CSIRT network as mandated in the Network and Information Systems Directive); and in the framework of international organizations (e.g., NATO’s Computer Incident Response Capability).Footnote 7
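To make the standardized threat indicators mentioned above concrete, the following minimal sketch models a shareable record on the OASIS STIX 2.1 “indicator” object, one of the standardized formats that automated IS platforms exchange. This is an illustrative simplification, not a conformant STIX implementation; the field values and the helper function name are hypothetical.

```python
# Sketch of a STIX-like cyber threat indicator, the standardized record type
# that IS platforms exchange among sharers. Values below are hypothetical.
from datetime import datetime, timezone

def make_indicator(pattern: str, description: str) -> dict:
    """Build a minimal STIX 2.1-style indicator record for sharing."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "name": description,
        "pattern": pattern,       # the observable to match, e.g., a file hash
        "pattern_type": "stix",
        "valid_from": now,
        "created": now,
    }

indicator = make_indicator(
    pattern="[file:hashes.'SHA-256' = 'aec070645fe53ee3b3763059376134f058cc33']",
    description="Sample malware dropper hash (hypothetical)",
)
```

Because such records use agreed field names and pattern syntax, a receiving platform can parse and act on them automatically, which is what enables the “machine speed” sharing discussed later in this chapter.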

Moreover, if the definitional scope of IS broadens to include notifications of irregular activity in cyberspace, then sharers also include individual members of the public who may share reports of suspected cyber fraud and cybercrime with entities such as the FBI and national authorities within the EU, via dedicated websites such as the FBI’s Internet Crime Complaint Center and the national sites listed on the platform of Europol’s Cybercrime Center, “Report Cybercrime Online” (FBI, 2020; Europol, 2020).

The above sampling of sharing entities illustrates the criticality of a polycentric approach to the governance of cyberspace, one that includes a diversity of actors to address a collective problem. Beyond the modes of IS for bolstering cybersecurity among governmental and private companies and organizations reviewed in this section, current trends in the development of IS include intrasectoral sharing of cyber threat data, the integration of artificial intelligence capabilities to improve IS, the participation of expert individuals in IS platforms, and the inclusion of the wider public for the purpose of reporting suspicious activity that may constitute a cybercrime or an indication of a new cyber threat on financial and consumer platforms.

We exclude from the present discussion IS between civilian entities and military or other covert state operators, due to the lack of transparency of most such arrangements (Robinson & Disley, 2010, p. 9). While there are some examples of military actors sharing cyber threat data publicly, as in US Cyber Command’s utilization of the VirusTotal platform in September 2019 to share malware samples associated with the North Korean Lazarus Group, such sharing is neither consistent nor transparent, and it is thus difficult to analyze conclusively (Vavra, 2019). Should a trend emerge toward IS by military and intelligence stakeholders with the public, in order to help strengthen common cybersecurity postures, it would be an interesting development that would further support the argument in favor of the polycentricity of IS.

In concluding this initial definitional and conceptual discussion of IS, we note that IS must develop in concert with the changing cyber threat landscape in order to retain its relevance and credibility for participants. These developments dovetail with the approach that cyber peace is a dynamic situation rather than a static one, and that it, too, must take into account changing aspects of cyberspace activities.

In the following two sections, we briefly examine two examples of governmental and private sector exchange of cyber information, each incorporating a different model of IS. The first example, the US’ CISCP, constitutes a national platform with both governmental and private sector sharers. The second example, the FS-ISAC, is global in scope,Footnote 8 yet it has been established by private organizations in the financial sector as a not-for-profit entity. Additional platforms, and some of their characteristics, are noted following these two, as well as a brief summary of commonalities and differences.

2.2 The DHS Cyber Information Sharing and Collaboration Program

The US Department of Homeland Security and Department of Justice provide a dedicated platform for IS between governmental and private sector organizations, the CISCP. Originally established as a platform for the benefit of critical infrastructure operators pursuant to Presidential Decision Directive-63 of May 1998 (as updated in 2003 by Homeland Security Presidential Directive 7), the CISCP is a generic, voluntary, free-of-charge IS platform, open to public and private sector organizations. By incorporating operators of critical infrastructures and other private and governmental organizations into one platform, CISCP aims “to build cybersecurity resiliency and to harden the defenses of the United States and its strategic partners” (CISCP, www.cisa.gov/ciscp). Thus, it is an explicitly domestic IS platform, operating under US legal and regulatory constraints. Prospective participants sign an agreement establishing the modalities of the exchange of anonymized cybersecurity information, thus ensuring protection from legal liability that may ensue from the sharing of protected information such as personal data, information subject to sunshine laws, and some proprietary data. The platform is described as follows:

[CISCP] enables actionable, relevant, and timely unclassified information exchange through trusted public-private partnerships across all critical infrastructure … sectors. CISCP fosters this collaboration by leveraging the depth and breadth of DHS cybersecurity capabilities within a focused operational context … [it] helps partners manage cybersecurity risks and enhances our collective ability to proactively detect, prevent, mitigate, respond to, and recover from cybersecurity incidents.

(Cyber Information Sharing and Collaboration Program, www.cisa.gov/ciscp)

Upon completion of an onboarding training session, participating organizations are provided with two types of CISCP data, reflecting the abovementioned distinction between strategic and tactical IS. The first is ongoing cyber threat information that is made available to participants through indicator bulletins, analysis reports, and malware reports. Two examples are the Weekly Bulletin, summarizing new vulnerabilities according to NIST’s National Vulnerability Database classification system (U.S. Department of Homeland Security, 2020), and Joint Alerts, such as that issued in early April 2020 on the exploitation of COVID-19 by malicious cyber actors (Cybersecurity and Infrastructure Security Agency, 2020b).

The second type of IS provided by CISCP is real-time information about emerging hostile cyber events, characterized by actionable data such as technical indicators of compromise and measures to be taken for resolving them (software updates and patches, file hashes, and forensic timelines). One example is the January 2020 alert regarding serious vulnerabilities in Microsoft Windows operating systems, designated CVE-2020-0601 (also, less officially, “Curveball” and “Chain of Fools”) (Wisniewski, 2020). The alert warned of a spoofing vulnerability in the way that Windows validates a certain type of encrypted certificate. A hostile actor could exploit this vulnerability through a man-in-the-middle attack, or by using a phishing website (such as a spoofed version of an individual user’s bank website) to obtain sensitive financial data or to install malware on a targeted system.

The CISCP shared two types of tactical cybersecurity information with platform participants: a Microsoft Security Advisory addressing the vulnerability by ensuring that the relevant encrypted certificates were completely validated, and a National Security Agency advisory providing detection measures for targeted organizations (Cybersecurity and Infrastructure Security Agency, 2020a). As a result, the Windows vulnerability was quickly identified and addressed by targeted actors. Analysts have noted that IS was especially effective in this incident, resolving a “dangerous zero-day vulnerability” because of the proactive disclosure made by the NSA to Microsoft, which allowed the vulnerability and patch to be rapidly and simultaneously shared at “machine speed” through the CISCP’s automated indicator sharing capability (Wisniewski, 2020). The CVE-2020-0601 event thus exemplifies the importance of leveraging IS among a diversity of sharers – here, governmental and private sector actors – in a transparent manner (Schneier, 2020).

2.3 Financial Services Information Sharing and Analysis Center (FS-ISAC)

The second IS platform for analysis is FS-ISAC. Like the CISCP, it was established pursuant to Presidential Decision Directive-63; yet the scope of its activity differs from the CISCP’s in three important respects: It is restricted to the regulated financial sector; it is explicitly global in its membership and scope; and it requires a fee for participation. Thus, it provides a different model for IS from that of the CISCP, focusing on the sector-specific threat vectors and risks of the vulnerable and frequently targeted global financial sector (World Economic Forum, 2019).

FS-ISAC is the leading global IS platform for this sector, with 7,000 members in over 70 jurisdictions. It is constituted as a nonprofit organization, with headquarters in the USA and regional hubs in the UK and Singapore. Member institutions are regulated private-sector financial entities (with some exceptions) and include banks, brokerage and securities firms, credit unions, insurance companies, investment firms, payment processors, and financial trade associations. A separate subplatform was established in July 2018 under the auspices of FS-ISAC for governmental and regulatory entities (Cision, 2018): This CERES platform (CEntral banks, REgulators and Supervisory entities) utilizes separate Operating Rules (www.fsisac.com/fsisac-ceres-operating-rules) and Subscriber Agreements (www.fsisac.com/ceres-forum-subscriber-agreement) for its members.

The FS-ISAC platform focuses on intrasectoral IS: Government-sourced information, which reaches the platform via the DHS’ National Cybersecurity and Communications Integration Center (the provider of US federal government cyber advisories), is independently vetted by the platform’s Analysis Team before being shared. The primary objective is to share “relevant and actionable” information among sectoral participants on an ongoing basis “to ensure the continued public confidence in global financial services” (FS-ISAC, www.fsisac.com/). The motivation for members to utilize the FS-ISAC platform includes “[its] access to … best-available information, … trusted consultation with other experts in interpreting the information, the classified working environment” (He, Devine, & Zhuang, 2018, p. 217), and the opportunity to access all of this on a single, sector-specific dedicated platform. Shared data include sector-specific threat alerts and indicators, intelligence briefings, tabletop exercises, and mitigation strategies. Participants are eligible to participate in seven separate levels of IS, in accordance with graded membership fee levels, which can amount to tens of thousands of dollars annually (Weiss, 2015, pp. 9–10). To increase its global reach and promote cybersecurity within the financial sector, FS-ISAC also provides a no-cost, unidirectional crisis alert service for financial institutions that do not opt for paid membership. The FS-ISAC Operating Rules, Subscriber Terms and Conditions, and End User License Agreement are all available to the public on its website, but organizations accepted for membership are required to sign an additional Subscriber Agreement that is forwarded only following an internal authentication process.

The platform itself is operated by a private sector service provider and overseen by a member-constituted board. Information may be attributed or shared anonymously via encrypted web-based connections, and alerts are distributed by the FS-ISAC Analysis Team in accordance with the service level to which the member has subscribed. Members are notified of urgent and crisis situations via the type of communication they designate (electronic paging, email, Crisis Conference call), and are required by the Subscriber Agreement to access the FS-ISAC portal to retrieve relevant information. Due to the highly regulated nature of the financial sector and the high confidentiality of the information it processes, members are explicitly permitted to submit information anonymously. In addition, all data that have not been specifically designated as attributable to the sharer are subject to a two-step process to scrub all references to the submitting company: the first step automated, via keyword search, and the second a review by the Analysis Team. Incoming information collected by FS-ISAC from members is shared with government and law enforcement agencies only with the consent of the sharing member. Concerns around the sharing of sector-specific commercial information are addressed by an explicit ban on such exchanges under the antitrust and competition provisions of the Rules and the Subscriber Agreement, and by the applicability of all relevant laws and regulations in member countries (FS-ISAC Operating Rules, art. 9). Likewise, members are bound by a confidentiality agreement and by requirements with respect to any sharing of protected personal data (FS-ISAC Operating Rules, arts. 11 & 12).
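The two-step scrubbing process described above, an automated keyword pass followed by analyst review, can be sketched as follows. The function names, the placeholder string, and the identifier list are illustrative assumptions, not FS-ISAC’s actual implementation.

```python
# Sketch of a two-step anonymization pipeline for shared threat reports:
# step 1 is an automated keyword scrub; step 2 flags the result for a
# manual check by an analysis team. All names here are hypothetical.
import re

def automated_scrub(report: str, identifiers: list[str]) -> str:
    """Step 1: replace each submitting-company identifier (names, domains)
    with a neutral placeholder, case-insensitively."""
    for term in identifiers:
        report = re.sub(re.escape(term), "[REDACTED]", report, flags=re.IGNORECASE)
    return report

def queue_for_analyst_review(report: str) -> dict:
    """Step 2: hand the scrubbed report to human reviewers to catch any
    attributable reference the keyword pass missed."""
    return {"text": report, "status": "pending_analyst_review"}

raw = "Phishing wave targeting Acme Bank portals at acmebank.example."
scrubbed = automated_scrub(raw, identifiers=["Acme Bank", "acmebank.example"])
submission = queue_for_analyst_review(scrubbed)
```

The design point is that the automated pass is fast but literal, so the human review step remains necessary for paraphrased or indirect references that no keyword list can anticipate.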

FS-ISAC maintains an all-sector, global cybersecurity alert level, the Financial Services Sector Cyber Threat Advisory, and uses the standardized Traffic Light Protocol (TLP) that is also employed by CISCP, as further described below. Recent research shows that FS-ISAC's use of automated peer-to-peer alerts has decreased the time for generation of cybersecurity compromise indicators by IS participants "from nearly six hours to one minute" (Wendt, 2019a, p. 109), and that "… the automated receipt, enrichment, and triage of [indicators] by the financial institutions were reduced from an average of four hours to three minutes. In total, the automation reduced the average time to produce an IOC, disseminate an IOC, and initiate a response from approximately 10 hours to 4 minutes" (Wendt, 2019b, p. 27).

At present, financial sector entities "actively participate" in peer-to-peer platforms such as FS-ISAC (Wendt, 2019a, p. 115), leveraging automated IS to boost organizational and sectoral cybersecurity. Yet FS-ISAC and similar sectoral ISACs have come under criticism for the less-than-optimal participation of members in the platform. Reasons include the platform's reliance on voluntary sharing by members – and thus the ease with which an institution can act as a "free rider"; the potentially negative impact of sharing vulnerabilities and risks on commercial reputation and profitability within the sector; and concerns about substantive legal exposure with respect to protected personal data, corporate IP, and antitrust (Liu, Zafar, & Au, 2014, p. 1). The perception of vulnerability conveyed by participation in an IS platform may be an additional factor (Wagner et al., 2019, at 2.6). Thus, on the one hand, the use of FS-ISAC as a platform for sharing among financial sector participants may be readily adopted, especially given the cost-free option for receiving urgent governmental alerts. On the other hand, incentivizing IS on the part of private sector members is much more challenging. We address this concern in Section 4.

2.4 Operationalizing IS as a Standardized Best Practice for Cybersecurity

Information sharing on cyber threats and vulnerabilities of all types that passes through the CISCP, FS-ISAC, and other IS platforms requires technological measures to safeguard IS at three levels: (1) The rapid provision of data by the sharing organization; (2) its confidential transmission; and (3) its timely processing, distribution, and storage on the IS platform. As we have seen in the above examples, IS platforms leverage standardized, automated formats that enable rapid dissemination and reception of cyber threat indicators (CISA Incident Reporting System, www.us-cert.gov/forms/report; US-CERT DHS Cyber Threat Indicator and Defensive Measure Submission System, www.us-cert.gov/forms/share-indicators). Well-known examples are the STIX and TAXII indicator formats (footnote 9), which also enable Automated Indicator Sharing (AIS) (www.us-cert.gov/ais), and the standard TLP, which classifies the security levels of shared data using four colors that indicate the permitted sharing perimeters (see Figure 3.1) (footnote 10).

Figure 3.1 Traffic Light Protocol (TLP) definitions and usage, CISA [no date].
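
To make these standardized formats concrete, the sketch below assembles a minimal STIX-2.1-style indicator tagged with a TLP sharing level, roughly as an automated feed might carry it. The hash value, indicator name, and TLP choice are hypothetical; the one-line TLP descriptions paraphrase the standard four-color scheme, and a production system would use a dedicated STIX library and formal marking-definition objects rather than hand-built JSON.

```python
import json
import uuid
from datetime import datetime, timezone

# TLP sharing perimeters (paraphrasing the standard four-color scheme).
TLP_LEVELS = {
    "RED":   "recipients only; no further disclosure",
    "AMBER": "limited disclosure within recipients' organizations",
    "GREEN": "disclosure within the sharing community",
    "WHITE": "disclosure is not limited",
}

def make_indicator(pattern: str, name: str, tlp: str) -> dict:
    """Build a minimal STIX-2.1-style indicator object (illustrative)."""
    assert tlp in TLP_LEVELS
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": datetime.now(timezone.utc).isoformat(),
        "name": name,
        "pattern": pattern,
        "pattern_type": "stix",
        # Simplified label; real STIX attaches TLP via marking-definition objects.
        "labels": [f"tlp:{tlp.lower()}"],
    }

indicator = make_indicator(
    pattern="[file:hashes.'SHA-256' = 'aec070645fe53...']",  # hypothetical, truncated hash
    name="Suspected phishing dropper",
    tlp="AMBER",
)
print(json.dumps(indicator, indent=2))
```

Because every participant emits and parses the same machine-readable structure, alerts can be disseminated and triaged automatically — the source of the time reductions reported above.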

There are many examples of national and transnational IS platforms utilizing similar, standardized systems for threat indicator transmission, including NATO (Oudkerk & Wrona, 2013); the EU's CSIRT network established under the EU NIS Directive (Directive 2016/1148) (footnote 11); the Cyber Threat Alliance (Fortinet, 2017); Israel's "Showcase" (Chalon Raávah) (Israel Cyber Directorate, 2019) and its FC3 (Housen-Couriel, 2018; Housen-Couriel, 2019; Ministry of Finance and the Cyber Directorate, 2017); the CiSP of the UK National Cyber Security Centre (National Cyber Security Centre, n.d.); and the "Informationspool" platform supported by Germany's Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, BSI) through its "cyber alliance" (Allianz für Cyber-Sicherheit) (Alliance for Cyber Security, n.d.).

In addition to these IS platforms, which foster IS among governmental, corporate, and some other institutional actors for a broad range of cyber threats and risks, several specialized IS platforms focus on a narrower risk typology that pinpoints cybercrime and terrorist activity on the Internet. Examples include INTERPOL's Cybercrime and Cyber-terrorism Fusion Centres (INTERPOL, n.d.); EUROPOL's European Cybercrime Centre, which has been effective in botnet takedowns and in the protection of children online (Europol, n.d.); and the Hash Sharing Consortium established in the framework of the Global Internet Forum to Counter Terrorism (GIFCT), founded in 2016 by Facebook, Google, YouTube, and Twitter to share information on extremist and terrorist content online, whose database contains more than 200,000 such hashes (Global Internet Forum to Counter Terrorism, n.d.).

These and other such IS platforms reflect organizational and regional differences in the modes of gathering and processing cyber threat indicators and other operational data. Yet they all rely on standardized and vetted processes that promote trust among sharing entities (International Standards Organization, 2015). The developing technical protocols, and the informal codes of conduct around their use, constitute an important aspect of IS as a best practice for cybersecurity and contribute to incentivizing its use by a plurality of sharers.

3 Mitigation of Cyber Threats and Events through Information Sharing: Discussion

Although neither the sole means of closing gaps in cybersecurity nor by any means a blanket remedy, IS already serves as a key measure for bolstering national, sectoral and, ultimately, global cybersecurity by leveraging and optimizing interdependencies (Europol, 2017). Nevertheless, there is still critique of its present use as a measure for boosting cybersecurity and mitigating risk (footnote 12). Melissa Hathaway (2010) has noted that the considerable quantity of available IS platforms poses a challenge for limited organizational and governmental resources, causing confusion and undercommitment (counting fifty-five such government-initiated partnerships in the USA alone). Zheng and Lewis (2015, p. 2) emphasize "programmatic, technical and legal challenges" to IS. Lubin (2019) posits that the increased adoption of cyber insurance policies by private corporations, groups, and individuals may have a chilling effect on IS because "there are often very strict parameters regarding [a policy holder's] notification and cooperation [regarding hostile cyber events] in the insurance policy." Finally, the methodologies for evaluating the success of certain IS platforms over others are still developing – as are the definitions of "success" itself in the cyber context (Garrido-Pelaz, González-Manzano, & Pastrana, 2016, pp. 15–24).

The reasons that organizations may fail to fully adopt and operationalize IS, despite its advantages, may be characterized as either (1) operative or (2) normative-substantive.

The operative disincentives include:

  • The inability to establish trust among sharing entities, some of whom may be competitors, including concern regarding free riders (entities that benefit from IS without contributing themselves).

  • Costs related to IS, including recruitment, training, and retention of appropriate cybersecurity personnel, and organizational time spent on IS, including time devoted to "false positives" (i.e., incorrect alerts based on bad information) (Powell, 2005, p. 507).

  • Lack of transparency regarding the robustness and confidentiality of IS platforms, including the possible use of shared data by any participating government agencies for noncybersecurity purposes, such as the tracking of individuals for immigration control or unauthorized surveillance (Johnson et al., 2016, pp. 4–5).

  • Regulatory redundancy, where other, possibly competing, IS formats are mandated and may complicate efficient IS (Knerr, 2017, pp. 550, 553; Robinson, 2012) (footnote 13).

  • Concern that participation in IS platforms may result in the perception that the sharer is vulnerable to cyber threats (Wagner et al., 2019, at 2.6).

Three of the normative-substantive disincentives are:

  • The potential exposure of protected personal data shared by organizations, with resulting regulatory sanctions and exposure to litigation by data subjects and regulators.

  • The potential exposure of organizational IP, with potential chilling effects on organizational innovation, and possible implications for corporate market value.

  • Concerns regarding antitrust implications of IS within a sector.

Taken together, the operative and normative-substantive disincentives to IS help to explain why some cyberspace actors are reluctant to fully adopt IS as part of their overall cybersecurity strategies on their own initiative, and why, when they do participate, they may do so less than optimally (including in situations where required to do so by regulators) (Barford et al., 2010, pp. 3–13; Sutton, 2015, pp. 113–116). Nonetheless, despite these potential weaknesses in IS platforms, there is, overall, strong continued support for their inclusion in legal, policy, and standardization initiatives, as shall be shown in the following section. Not only do the potential advantages of increased "cyber situational awareness" outweigh the disincentives but, as argued here, technological developments such as standardized reporting of cyber threat indicators, STIX and TAXII architectures, TLP, and increasingly automated IS (the "commoditization" of cyber threat indicators) signal an increasing awareness of the criticality of IS for the mitigation of cyber risk on the part of all stakeholders.

4 Characterizing the Relationship between Cyber Peace and Information Sharing: A Best Practice and Confidence-Building Measure that Leverages Polycentricity
4.1 Information Sharing as a Best Practice in Support of Cyber Peace

The definition of cyber peace cited at the beginning of this chapter identifies four of its aspects: clarification of "rules of the road" for setting actors' expectations and thresholds for IS; threat reduction; risk assessment; and best practices for carrying out these three tasks – all of which are supported by IS. Participants in any given IS platform agree ex ante to the thresholds of nonpermissible online behavior by hostile actors, by virtue of the triggers indicating precisely when relevant information should be shared by them and is shared with them. The typical informational asymmetries that have characterized cyber hostilities to the attacker's advantage are addressed by the sharing of data, such as the alerts referred to in the above examples of CISCP and FS-ISAC. Risk assessment is carried out, inter alia, on the basis of indicators, data, and situational evaluations received through IS.

Two additional attributes of IS that support sustainable and scalable cyber peace should be noted. The first is its neutrality with respect to the typology of both attackers and targets. Whether the attacker is an individual, a country, a group of criminal hackers, an inside operator, or an autonomous or semiautonomous computer, the IS alert thresholds are similar (footnote 14). Likewise, alerts, vulnerabilities, and warnings are target-neutral, and are similarly applicable in the context of state-to-state hostilities, cybercrime, terrorist activity, hacktivism, and money laundering. The second attribute is the convenient scalability of IS, as sharing technologies and protocols currently undergo standardization, automatization, and commoditization.

Work is still needed to quantify the specific advantages that IS brings as a best practice in boosting levels of cybersecurity, especially in terms of its cost-effectiveness as part of the overall cybersecurity strategy of organizations and states. This much-needed analysis will contribute to a better understanding of the economic aspects of sustainable cyber peace as well.

4.2 Beyond Best Practice: The Value of Information Sharing as a CBM

Building on this understanding of IS as a best practice, it is argued here that IS further supports sustainable cyber peace as a CBM at the international level, among the states, international organizations, and multinational companies that are critical to ensuring global cybersecurity. The framing of IS as a CBM, rather than as a binding, substantive norm to which these entities are subject as a matter of law or policy, is beneficial to the utilization of IS platforms at the international level (Borghard & Lonergan, 2018). By sidestepping substantive multilateral commitments, IS can be more readily utilized to support cybersecurity and cyber peace. Examples where this has occurred include the UN's 2015 GGE (United Nations General Assembly, 2015), the OSCE's 2016 listing of cybersecurity CBMs (Organization for Security and Co-operation in Europe, 2016), and the 2018 Paris Call for Trust and Security in Cyberspace (Principle 9).

CBMs were originally used during the Cold War to further disarmament processes amid the diplomatic and political standoff between the USSR and the West. Nonmilitary CBMs have been defined more generally as "actions or processes undertaken … with the aim of increasing transparency and the level of trust" between parties (Organization for Security and Co-operation in Europe, 2013). They are "one of the key measures in the international community's toolbox aiming at preventing or reducing the risk of a conflict by eliminating the causes of mistrust, misunderstanding and miscalculation" (Pawlak, 2016, p. 133). CBMs are also critical in the global cybersecurity context and have been described as a "key tool in the cyber peacebuilder's toolkit" (Nicholas, 2017).

In a 2017 in-depth study of eighty-four multilateral and bilateral initiatives addressing the collective action challenges of cybersecurity – including treaties, codes of conduct, agreements, memoranda, and public declarations – IS was found to be included as an agreed cybersecurity measure in more than 25 percent of such initiatives (twenty-one of the eighty-four) (Housen-Couriel, 2017, pp. 51–52). Moreover, the analysis was able to isolate several specific elements of IS, discussed above, that were individually included in this top quarter: IS measures in general (footnote 15); establishment of a specific national or organizational point of contact for information exchange; and sharing of threat indicators (Housen-Couriel, 2017, pp. 51–52) (footnote 16). These elements were three of a dozen CBMs that occur with sufficient frequency to constitute a "convergence of concept" that diverse stakeholders – states, regional organizations, intergovernmental organizations, specialized UN agencies, standards organizations, private corporations, sectoral organizations, and NGOs – have incorporated into cybersecurity initiatives (footnote 17). The study concluded that, while such cyberspace stakeholders are frequently willing to incorporate general arrangements for IS (it is in fact the leading agreed-upon cyber CBM in the initiatives studied), and even to specify a national or organizational point of contact, they are less willing to commit to a 24/7, real-time exchange of cybersecurity-related information (Housen-Couriel, 2017, p. 67). This finding indicates a gap that should be considered when further leveraging IS in the context of cyber peace.

Nonetheless, as noted above, IS as a CBM holds the advantage of bypassing the present, considerable challenges of achieving formal and substantive multistakeholder agreement on substantive cyber norms, until such time as binding norms are legally and geopolitically practicable (Efroni & Shany, 2018; Finnemore & Hollis, 2016; Macak, 2017). A few examples of binding domestic law and international regulatory requirements for organizational participation in IS platforms do exist, such as the pan-EU regime established under the EU NIS Directive (Directive 2016/1148), the Estonian Cybersecurity Act of 2016, and the US Department of Defense disclosure obligations for contractors whose networks have been breached. However, there are many more regimes based on voluntary participation, such as the CISCP and FS-ISAC reviewed above, Israel's FC3, and the global CERT and CSIRT networks of 24/7 platforms for cyber threat monitoring, including the EU network of more than 414 such platforms (European Union Network and Information Security Agency, 2018).

For the purposes of the analysis in this chapter, IS is treated as a nonbinding CBM that also constitutes a best practice for bolstering cybersecurity and cyber peace, yet does not require a binding legal basis for its implementation. The critical issue of the use of regulatory measures, both binding and voluntary, to promote IS for optimal cybersecurity and cyber peace remains, as noted above, an issue for further research.

4.3 Leveraging Polycentricity for Effective IS

In this section, we briefly address the advantages of a polycentric approach for effective IS. Polycentricity is an approach and framework for ordering the actions of a multiplicity and diversity of actors around a collective action problem (footnote 18). Several scholars in the field of cybersecurity describe and analyze regulatory activity in cyberspace specifically in accordance with such an approach (Craig & Shackelford, 2015; Kikuchi & Okubo, 2020; Shackelford, 2014, pp. 88–108). Polycentricity explicitly recognizes a multiplicity of sources of regulatory authority and behavioral organization for cyber activities – including nation-state actors, private sector organizations, third sector entities, and even individuals – and it acknowledges the value of employing a diversity of measures to address the collective action problem (Elkin-Koren, 1998; Shackelford, 2014; Thiel et al., 2019).

A polycentric approach is theoretically and conceptually most appropriate for supporting IS in particular and cybersecurity overall due, inter alia, to its inherent stakeholder inclusiveness, flexibility with regard to types of regulatory measures, and transparency with respect to potential violations of substantive privacy rights, IP protections, and antitrust provisions (Shackelford, 2014, p. 107). Moreover, in the context of IS, a polycentric approach maximizes the potential for remedying informational asymmetries among a diversity of vetted sharers, bringing to bear a variety of perspectives and capabilities (Kikuchi & Okubo, 2020, pp. 392–393; Shackelford, 2013, pp. 1351–1352) (footnote 19). Such an approach explicitly acknowledges the complex interdependencies of all actors in cyberspace (Shackelford, 2014, pp. 99–100). Thus, a polycentric approach will optimally include on an IS platform the broadest possible range of sharers: Government regulators and agencies themselves; sectoral actors that may share information informally, as they are targeted simultaneously by malicious cyber actors; umbrella groups formed within the sector for formal and informal IS; technical experts, academics, and consultants providing external assessments of IS models and their effectiveness; and individuals who may share information through governmental, sectoral, or organizational channels – or through informal channels such as social media – when they experience compromised cybersecurity through their personal Internet use.

The two examples reviewed above are relatively non-polycentric at present: CISCP is a public–private sector partnership that includes government agencies and companies in its membership, and FS-ISAC restricts participation even further, to private sector members only (central banks, sector regulators, and other government agencies must join the separate CERES platform). The challenges of building trust on these two platforms are significant and may continue to constitute barriers to the inclusion of a broader, more diverse membership. In the context of the financial sector especially, more polycentric participation in IS may be encumbered at present by legal and regulatory constraints. Nevertheless, financial institutions already recognize the important potential of gathering data on unusual, detrimental activity in their networks via reporting by customers and suppliers – that is, individual users who access parts of the network regularly and often, and who can serve as sensors for fraudulent and hostile cyber activity such as phishing (Cyber Security Intelligence, 2017). Individual user endpoints and accounts may be among the most vulnerable points of entry into an institution's network, but they also constitute a key element for cybersecurity data gathering at the perimeter of financial institutions that, we contend, should be leveraged within IS platforms as an additional means of mitigating the informational asymmetry between the hostile actor and the targeted organization.
Thus, the fraud prevention alert mechanisms on the websites of banks and some other private companies, by means of which customers may provide information about phishing schemes, irregular activity in their accounts, and other suspicious activity, might be incorporated into sectoral IS platforms (footnote 20). This growing understanding on the part of financial organizations, social media platforms, and consumer websites that much valuable information with respect to cyber risks may be garnered from individuals (including customers, employees, and suppliers) requires creative thinking around the incentivization of such IS, as well as the protection of individual privacy rights as cyber risk indicators are shared (footnote 21).

In summary, IS is likely to be most effective – as a best practice at the domestic level and as a CBM at the international level – when it is governed by a polycentric approach for the most efficient pooling of resources, knowledge, and experience to mitigate, counter, and respond effectively to cyber threats and events.

5 Summary and Conclusions

This chapter has aimed to show how IS platforms can serve as arbiters of cyber expertise; as venues for the exchange of technical data and the real-time coordination of defensive actions; and, perhaps most importantly, as incubators for the development of trust among key stakeholders in order to mitigate the effects of hostile activities in cyberspace. The analysis has aimed to support the thesis that one of the critical elements of achieving sustainable cyber peace, indeed a sine qua non for its governance, is the timely utilization of credible IS platforms that allow entities targeted by hostile cyber activities to pool information, resources, and insights in order to mitigate cyber risk. Successful platforms will leverage innovative technological developments for collecting actionable cyber threat data, both at the tactical, real-time level of incident response and at the level of strategic planning for amending vulnerabilities and developing long-term defense strategies.

Moreover, even as IS modalities are included in many initiatives for promoting cybersecurity among state and nonstate actors, they have the advantage of bypassing the need to achieve formal and substantive multistakeholder agreement on the cyber norms that are at the core of international and domestic legal regimes for the governance of cyberspace. At the international level, many contemporary scholars note that the difficulties of surmounting normative barriers await resolution until such time as states and international organizations are prepared to act more transparently in cyberspace and forge binding international and domestic legal regimes. Eventually, in international regimes to which states and organizations formally agree – or, perhaps, more gradually through the evolution of international custom – IS may be transformed from a norm-neutral CBM into an element of states' and organizations' due diligence under international cyber law (footnote 22).

Several issues that are beyond the present scope of this chapter invite additional research. Among them are the quantifiable, cost–benefit calculations of IS platforms as an element of cybersecurity and cyber peace; the role of regulation (including substantive legal norms) in promoting and incentivizing IS; the cumulative effects of standardization and automatization on IS processes; and a broader examination of the specific advantages of an explicitly polycentric approach to IS. IS models with respect to other global collective action problems, such as public health (especially relevant in the present COVID-19 pandemic), environmental quality, and the elimination of outer space debris are also salient: A broader, comparative analysis of IS regimes for the mitigation of risk in meeting these common problems may prove fruitful.

We conclude with a note of deep appreciation for the talented and committed women and men who are the ultimate heroes of the story of cyber IS: The security analysts who mine, winnow, and share critical cyber threat indicators as a matter of course, 24 hours a day, 365 days a year, over weekends, during their holiday breaks, and from anywhere they can possibly connect up to cyberspace.

4 De-escalation Pathways and Disruptive Technology: Cyber Operations as Off-Ramps to War

Brandon Valeriano and Benjamin Jensen
1 Introduction

The cyber war long promised by pundits has yet to arrive, failing to match the dramatic predictions of destruction many have been awaiting. Despite fears that digital death is on the horizon (Clarke & Knake, 2014), the international community has seen little evidence of it. While cyber operations have been used in concert with conventional military strikes, from Ukraine (Kostyuk & Zhukov, 2019) to operations against the Islamic State (Martelle, 2018), they have focused more on intelligence collection than on shaping direct interdiction. Worst-case, nuclear-grade cyberattacks (Straub, 2019) are unlikely and run counter to the logic of cyber action in the international system (Borghard & Lonergan, 2017), where most operations to date tend to reflect political warfare optimized for digital technology and deniable operations below the threshold of armed conflict (Jensen, 2017; Valeriano et al., 2018).

Decades of research in the field of cybersecurity have laid bare two findings so far: (1) We have failed to witness the death and destruction (Rid, 2020; Valeriano & Maness, 2015) that early prognosticators predicted and (2) digital conflict is typically not a path toward escalation in the international system (Valeriano et al., 2018). In survey experiments, when respondents were put in a situation where they had to respond to a militarized crisis using a wide range of flexible response options, they more often than not chose cyber response options in order to de-escalate conflicts (Jensen & Valeriano, 2019a, 2019b).

Beyond their raw potential, emergent capabilities like cyber operations are just one among many factors that shape the course of strategic bargaining (Schneider, 2019). New technologies often raise questions of resolve and human psychology more than objective power calculations about uncertain weapons. The uncertainty introduced by new strategic options, often called exquisite capabilities and offsets, can push states toward restraint rather than war. While these capabilities can certainly lead to dangerous arms races and future risks (Craig & Valeriano, 2016), they tend to play less of an escalatory role in more immediate crisis bargaining. This finding follows work on nuclear coercion showing that even nuclear weapons often fail to alter calculations during crises, or have little effect on the overall probability of a crisis (Beardsley & Asal, 2009a, 2009b; Sechser & Fuhrmann, 2017).

How do cybersecurity scholars explain the evident restraint observed in the cyber domain since its inception (Valeriano & Maness, 2015)? Why have the most powerful states, even when confronted with conventional war, avoided cyber operations with physical consequences? Is it fear or uncertainty that drives the strategic calculus away from escalation during cyber conflicts?

In this chapter, we unpack the strategic logic of interactions during a crisis involving cyber capable actors. We outline the limits of coercion with cyber options for nation-states. After proposing a theory of cyber crisis bargaining, we explore evidence for associated propositions from survey experiments linked to crisis simulations, and a case study of the US-Iranian militarized dispute in the summer of 2019.

2 Toward Cyber Peace and Stability

We are now a field in search of a theory: a theory of cyber peace that explains why cyber capabilities and digital technology offer stabilizing paths in the midst of crisis interactions (Valeriano & Maness, 2015). When we refer to cyber peace, we do not mean the absence of all conflict, or positive peace (Roff, 2016); what we have in mind is rather the more measured claim that, while cyber conflicts continue to proliferate, their severity and impact will remain relatively minor (Valeriano & Maness, 2015; Valeriano et al., 2018). This vision of negative peace assumes that violence will continue in the system, but we offer the perspective that, during strategic bargaining, cyber options may provide a path toward de-escalation. Cyber operations have the potential to stabilize crisis interactions between rival states. This point is especially important given that most state-based cyber antagonists are also nuclear-armed states (Pytlak & Mitchell, 2016).

On the road to war, a state faces many choices regarding the utilization of force and coercion (Schelling, 1960, 1966). Seeking to compel an adversary to back down, a state attempts to display credibility, capability, and resolve (Huth, 1999). To avoid outright conflict, a state can dampen the crisis by making moves that avoid conflict spirals. Much akin to the logic of tit-for-tat struggles of reciprocity (Axelrod & Hamilton, 1981), evidence suggests that actors may choose digital operations to respond proportionally to aggression.
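
The reciprocity logic invoked above can be illustrated with a toy iterated-game simulation in the spirit of Axelrod's tournaments. The payoff values and strategy behavior are standard textbook conventions, not drawn from this chapter's data.

```python
# Toy iterated prisoner's dilemma: tit-for-tat reciprocates the rival's
# last move, rewarding restraint and answering aggression proportionally.
COOPERATE, DEFECT = "C", "D"

# Conventional payoff matrix for the row player: temptation > reward > punishment > sucker.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first; thereafter mirror the opponent's previous move."""
    return COOPERATE if not opponent_history else opponent_history[-1]

def play(opponent_moves):
    """Score tit-for-tat against a fixed sequence of opponent moves."""
    score, their_history = 0, []
    for their_move in opponent_moves:
        my_move = tit_for_tat(their_history)
        score += PAYOFFS[(my_move, their_move)]
        their_history.append(their_move)
    return score

# A rival that probes with one defection and then returns to restraint:
print(play(["C", "D", "C", "C"]))  # → 11: one proportional reprisal, then renewed cooperation
```

The point of the sketch is the proportionality: a single defection draws exactly one answering defection, after which the strategy returns to cooperation rather than spiraling.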

Here we explore the role of cyber operations in producing crisis off-ramps that can stabilize interactions between rival states. During a crisis, a state actor faces response options to escalate the conflict, deter further violence, de-escalate the situation, or do nothing. This choice is especially acute during interactions with rivals, where tensions are higher. A cyber off-ramp is a strategic choice either to respond in kind or to de-escalate during a crisis by launching a cyber operation that helps a state set favorable bargaining conditions without losing a significant strategic advantage. By sending weak signals that demonstrate commitment to the issue at stake, crisis actors can leverage information effects to forestall further escalation.

Cyber operations are not clear paths to peace, but in the context of more dramatic options, digital technologies can lead us down a road away from war. During crisis situations, digital technologies can pull states back from the brink by revealing information to adversaries that helps manage escalation risks.

3 When Do Crises Escalate?

There is a well-established literature on international crises and escalation dynamics, which grew out of the Cold War and analyzes great power competition as a bargaining process (Schelling, 1958, 2020; Fearon, 1995; Powell, 2002). Conflict as a process is the result of strategic interactions in which participants attempt to gain an advantage short of the costly gamble of war (Fearon, 1995). During a crisis, each side attempts to signal its capabilities and resolve to the other by deploying military forces, conducting shows of force, making credible threats, and leveraging nonmilitary instruments of power such as sanctions and diplomatic demarches.

In this delicate dance, most leaders look to preserve their flexibility to manage escalation risks against the probability of achieving their political objectives. Work on international crises and militarized disputes illustrates this posture through a demonstrated preference for reciprocation strategies, in which states adopt a proportional response to threats as a means of maximizing their position short of escalation (Axelrod & Hamilton, 1981; Braithwaite & Lemke, 2011).

Yet the uncertainty and pressure of a crisis, along with preexisting factors shaping strategic preferences, can pull statesmen away from prudence and toward the brink of war. Rival states are prone to arms races and place a high premium on gaining an advantage in a crisis, increasing the probability of escalation (Vasquez, 1993; Sample, 1997; Valeriano, 2013). Territorial disputes tend to be particularly intractable and prone to escalation, especially when there is a recurring history of disputes (Vasquez & Henehan, 2010; Toft, 2014; Hensel & Mitchell, 2017).

Misperception looms large, causing signals to be misinterpreted (Jervis, 2017). Shifts in military capabilities can trigger different risk appetites as the offense–defense balance shifts (Jervis, 1978). There is an open debate about the extent to which espionage and subterfuge in cyberspace alter the security dilemma (Buchanan, 2016). Some work argues that cyber is the perfect weapon and will redefine warfare (Kello, 2017), while other assessments contend it creates a new stability–instability paradox (Lindsay & Gartzke, 2018). Rather than increasing the risk of escalation, cyber operations could act as a crisis management mechanism, allowing decision makers to make sharp distinctions between the physical and digital worlds and build active defenses on networks (Libicki, 2012; Jensen & Valeriano, 2019a; Valeriano & Jensen, 2019).

4 The Logic of Cyber Off-Ramps

This chapter helps develop a midrange theory hypothesizing that cyber operations are a possible mechanism for helping states manage crises in a connected world.

First, in crisis settings between rival states, cyber operations are best thought of as a coercive capability (Borghard & Lonergan, 2017). In addition to their value in intelligence operations (Rovner, 2019), they allow states to disrupt and degrade rival networks.

As instruments of coercion, cyber operations tend to produce fleeting and limited effects, best characterized as ambiguous signals (Valeriano et al., 2018). Ambiguous signals are "covert attempts to demonstrate resolve that rely on sinking costs and raising risks to shape rival behavior" (Valeriano et al., 2018, p. 13). States engage in covert communication, probing each other during a crisis (Carson, 2020). The benefit of cyber operations is that they are a weak signal that can be denied, preserving bargaining space while still demonstrating a willingness to act. This makes cyber operations a low-cost, low-payoff means of responding early in a crisis.

Second, experimental studies show that the public tends to treat cyber operations differently than operations in other domains. There are also key threshold dynamics associated with cyber operations. In a recent study, Kreps and Schneider (2019) found that "Americans are less likely to support retaliation with force when the scenario involves a cyberattack even when they perceive the magnitude of attacks across domains to be comparable." For this reason, cyber operations offer a means of responding to a crisis that is less likely to incur domestic audience costs that could push leaders to escalate beyond their risk threshold.

Avoiding escalation is especially appealing since there are indications that most twenty-first-century great powers maintain a public aversion to casualties. Even authoritarian regimes limit reporting and use a mix of private military companies and proxies to hide the true cost of war from their citizens (Reynolds, 2019). Given this emerging dynamic, cyber operations offer states a means of responding to a crisis without triggering direct, immediate human costs that can often lead to an emotional, as opposed to a rational, conflict spiral. Cyber operations help states manage thresholds in crisis interactions.

Third, and less explored by the cyber security literature to date, cyber operations are defined by unique substitutability dynamics. To say cyber operations are subject to substitution effects implies that states evaluate the trade-offs inherent in using cyber instruments when signaling another state.

In economics, there is a long history of using marginal analysis (Marshall, 1890; Krugman et al., 2008) to evaluate trade-offs in production and consumption. In microeconomics, the marginal rate of substitution is the rate at which a consumer will give up one good or service in exchange for another (Krugman & Wells, 2008). The two goods or services, even courses of action, can be perfect substitutes, in which case they are interchangeable, or imperfect substitutes, in which case the indifference curve shifts. Furthermore, there is a distinction between within-group and cross-category substitution in economic and psychological studies of consumer choice (Huh et al., 2016). There is also a long history of work on foreign policy substitutability in international relations (Most & Starr, 1983; Starr, 2000; Most & Starr, 2015). This research maps out when similar acts, as substitutes, trigger different (Palmer & Bhandari, 2000) or similar foreign policy outcomes (Milner & Tingley, 2011).
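The marginal rate of substitution invoked here has a standard formalization. As a sketch in our own notation (not the chapter's), for two goods x and y:

```latex
% Marginal rate of substitution of x for y: the amount of y a consumer
% gives up for one more unit of x while holding utility U(x, y) constant.
\mathrm{MRS}_{xy} \;=\; -\left.\frac{dy}{dx}\right|_{U=\bar{U}} \;=\; \frac{MU_x}{MU_y}
```

Perfect substitutes correspond to a constant MRS (linear indifference curves); for imperfect substitutes the MRS varies along the curve, which is the shifting trade-off the authors carry over to cyber versus conventional instruments.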

Applied to escalation and foreign policy, contemporary leaders evaluate whether to substitute a cyber effect for a more conventional instrument of power. We propose that there are unique substitutability dynamics involved in selecting cyber operations during strategic bargaining episodes. If cyber operations are not efficient substitutes, then achieving a given effect requires either more of them or complements. To the extent that cyber operations are an imperfect substitute, a state would have to use more cyber effects to compel an adversary than, for example, traditional diplomatic demarches or threats of military action. The central question for decision makers thus concerns the ideal typical cross-elasticity of demand for cyber operations.

We theorize that cyber operations are subject to certain characteristics that make them weak substitutes, better thought of as complements. In microeconomics, a complement is a good or service whose use requires the use of another, complementary good or service. If you use a printer, you are going to need a constant supply of toner and paper. With respect to cyber operations, this means that, as shaping mechanisms, they will tend to be paired with at least one more instrument of power to compensate for their weak substitutability as an ambiguous signal subject to threshold effects. This logic follows earlier findings that states tend to use cyber operations in conjunction with other instruments of power, including both positive and negative inducements (Valeriano et al., 2018).

Two additional dynamics alter the elasticity of demand for cyber effects in crisis bargaining. First, the elasticity of demand is skewed by the dual-use nature of cyber operations. Cyber operations tend to be a use-and-lose capability, limiting when states will risk employing high-end capabilities (Jensen & Work, 2018). Leaders who have cyber probes spying on adversary systems worry about sacrificing their digital scouts for fleeting attack opportunities, a calculation known in US joint doctrine as intelligence gain/loss.Footnote 1 They also worry about burning capabilities by exposing their operations. Many cyber capabilities are simultaneously intelligence tools and tools of subterfuge. A tool kit used to access a rival state's computer networks and extract information can also be used to deliver malicious code.

Returning to the concept of substitution, this dynamic means that states must pay information costs to burn access and deliver their payload. Once a state attempts to achieve an effect beyond espionage, it increases the risk that the rival state knows its networks are being accessed. Information costs, and the opportunity cost of future intelligence lost to achieve a cyber effect, skew elasticity and lower escalation risks. When a state does employ cyber capabilities to respond to a crisis scenario, it will prefer lower-end capabilities to reduce information costs. It is unlikely to employ more exquisite tools to achieve a cyber fait accompli that produces an escalation spiral. More importantly, states will look for specific conditions to use cyber substitutes, such as when a rival state has less cyber capability, which reduces the information costs associated with burning a digital spy.

Second, the elasticity of demand is further skewed by a second category of information cost: the shadow of the future (Axelrod, 1984; Axelrod & Keohane, 1985). States like the United States have more than one rival, and even a state with a single rival expects to interact with that rival in the future. Burning a tool or tool kit in the present therefore risks losing that capability relative to either another rival in the present or a target state in the future. This compounds the information costs that skew the indifference curve. As a result, cyber operations will tend to be used as complements, combined with other instruments of power to increase the expected marginal effect. They can be used as substitutes, but only under conditions where states assess a lower likelihood of paying the additional information costs associated with the dual-use dimension and the shadow of the future. On its own, the extent to which a cyber operation is substitutable could trigger a security dilemma (Herz, 1950; Glaser, 1997; Booth & Wheeler, 2007).Footnote 2 Yet the substitution of cyber capabilities occurs in a larger context defined by ambiguous signals and threshold effects that dampen escalation risks. These properties help states escape the security dilemma and view cyberattacks as less escalatory than conventional military operations. In the end, cyber capabilities are weak substitutes and, outside of narrow conditions, will be used more as complements to manage escalation.

Taken together, the above logic of weak coercive potential, thresholds, and substitution effects produces the following three hypotheses.

  • H1. Cyber operations are not escalation prone.

Observations from cases and survey experiments should demonstrate that, when cyber capabilities are present, they are not associated with increased escalation. The null hypothesis is that cyber operations are associated with escalation spirals. The hypothesis is best evaluated through large-N methods, whether associated with past observed cyber incidents or with survey experiments examining escalation preferences relative to the use of other instruments of power. Case studies would show the process and sequence associated with using cyber operations. One would expect to see cyber instruments used to check escalation as a weak, proportional alternative before crossing into higher thresholds.

  • H2. Cyber operations are more likely to be used as complements when states consider escalating a crisis.

Because of their weak substitutability, cyber operations will tend to complement other instruments of power. There are inherent cross-domain effects associated with modern crisis management (Gartzke & Lindsay, 2019). When examining survey experiments on crisis decision making that involve selecting between cyber and noncyber response options, there should be more instances of combining cyber effects with other instruments of power. The null hypothesis would be that there is no relationship between cyber escalation and using multiple instruments of power.

  • H3. Cyber operations are more likely to be used as substitutes for other measures of power when there are no indications of rival cyber activity.

Since cyber operations tend to be weak substitutes, due to information costs and the elasticity of demand, there should be narrow scope conditions that shape when and how they are used in place of more traditional instruments of power. The state will want to minimize the shadow of the future and avoid losing the inherent value of cyber capabilities that are unknown to the adversary. This dynamic implies that, in survey experiments, one would expect to see a higher percentage use of cyber tools in treatments where there are no indications the adversary is using cyber operations. This initial indication helps respondents gauge the substitutability costs and inherent trade-offs of using cyber capabilities.

5 Hope amongst Fear: Initial Evidence
5.1 Research Design

Demonstrating that cyber operations can serve as crisis off-ramps, and that they represent a common strategic choice to respond proportionally during crisis interactions, is a difficult proposition. The goal is to find evidence, under controlled settings, of how a state chooses among an option that might cause significant damage, an option that will cause little or no harm, the option of doing nothing, and the ability to wage a cyber operation against the opposition.

We propose two methods to investigate our propositions: a theory-guided case study investigation and a survey experiment using crisis simulations and wargames. Once the plausibility of our propositions is determined, we can follow up our examinations with further support and evidence through follow-on experiments. This is not a simple process, and we only begin our undertaking here.

The case study presented here represents a theory-guided investigation according to Levy's (2008) typology. Such case studies are "structured by a well-developed conceptual framework that focuses attention on some theoretically specified aspects of reality and neglects others" (Levy, 2008, p. 4). In these cases, we cannot rule out other theoretical propositions for the cause of de-escalation, but we can demonstrate the process by which cyber activities provide off-ramps on the road to conflict.

Such case studies can also serve as plausibility probes. According to Eckstein (1975, p. 108), plausibility probes "involve attempts to determine whether potential validity may reasonably be considered great enough to warrant the pains and costs of testing." In a case study investigation, we can only pinpoint the impact of a cyber operation as a choice and examine the outcome – de-escalation.

Case studies are useful, but they do not provide controlled situations in which there are clear options and trade-offs for leadership. It might be that a cyber option was decided on before the crisis was triggered, or that a cyber option in retaliation was never presented to the leader. Here, we use a short case study to tell the story of how a cyber operation was chosen and why it represented a limited strike meant to de-escalate a conflict, and we pair this analysis with an escalation simulation.

Deeper investigations under properly controlled settings can be done through experimental studies: in this case, experimental wargames in which a group of actors playing roles must make choices when presented with various options. Our other option is survey experiments, which demonstrate the wider generalizability of our findings, but such undertakings are costly and time intensive.

Experiments are increasingly used in political science to evaluate decision making in terms of attitudes and preferences (Hyde, 2015; Sniderman, 2018). While there are challenges associated with external validity and with ensuring that the participants reflect the elites under investigation, experiments offer a rigorous means of evaluating foreign policy decision making (Renshon, 2015; Dunning, 2016). For the experiment below, we employ a basic 2 × 2 factorial design.

5.2 Wargames as Experiments

To date, research on cyber operations has focused on crucial case studies (Lindsay, 2013; Slayton, 2017), historical overviews (Healey & Grindal, 2013; Kaplan, 2016), or quantitative analysis (Valeriano & Maness, 2014; Kostyuk & Zhukov, 2019; Kreps & Schneider, 2019). Recently, researchers have expanded these techniques to include wargames and simulations analyzed as experiments.

There is a burgeoning literature on the utility of wargames and simulations for academic research. Core perspectives generally define the purpose and utility of wargames but fail to address the wider social science implications of new methodologies, defaulting to the view that war-gaming is an art (Perla, 1990; Van Creveld, 2013). More recently, a growing body of research offers a social science perspective on war-gaming as a research methodology (Schneider, 2017; Pauly, 2018; Jensen & Valeriano, 2019a, 2019b). We follow the perspective that wargames can add to our knowledge about crisis bargaining under novel technological settings (Reddie et al., 2018; Lin-Greenberg et al., 2020).

To evaluate the utility of cyber operations in a crisis, the researchers used a conjoint experiment linked to a tabletop exercise recreating national security decision making. Small teams were given packets that resembled briefing materials from US National Security Council (NSC)-level deliberations, based on guidance from NSC staffers from multiple prior administrations. The packets outlined an emerging crisis between two nuclear-armed states: Green and Purple. The graphics and descriptions were designed to obscure any resemblance to current states, such as China and the United States. The respondents were asked to nominate a response to the crisis, selecting from a range of choices capturing different response options across diplomatic, information, military, and economic instruments of power. Each instrument of power had a scalable threshold of options, from de-escalatory to escalatory. This range acted as a forced Likert scale. Figure 4.1 shows a sample page from the respondent packets outlining the road to crisis and the balance of military capabilities.

Figure 4.1 Diagram from Wargame Simulation.

The packets were distributed to a diverse, international sample of 400 respondents in live session interactions. In terms of the types of respondents who participated, 213 were students in advanced IR/political science classes, indicative of individuals likely to pursue a career in foreign policy; 100 were members of the military, with the most common rank being major (midcareer); 40 were members of a government in foreign policy decision-making positions; 19 were involved with major international businesses; 13 opted not to disclose their occupation; and 15 left it blank. Of these respondents, 267 were male, 110 were female, 4 preferred not to say, and 19 left the question blank.Footnote 3 With respect to citizenship, 295 respondents were US citizens, 87 were non-US citizens, 4 preferred not to say, and 14 left their response blank.Footnote 4

These participants were randomly assigned to one of four treatment groups:

  • Scenario 1. A state with cyber response options (cyber resp) that thinks the crisis involves rival state cyber effects (cyber trig);

  • Scenario 2. A state with no cyber response options (no cyber resp) that thinks the crisis involves rival state cyber effects (cyber trig);

  • Scenario 3. A state with cyber response options (cyber resp) that thinks the crisis does not involve rival state cyber effects (no cyber trig); and

  • Scenario 4. A state with no cyber response options (no cyber resp) that thinks the crisis does not involve rival state cyber effects (no cyber trig).
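The equal random assignment behind the four treatment groups can be sketched as follows; this is our own minimal illustration (the seed and variable names are assumptions, not details from the study):

```python
# Minimal sketch of randomly assigning 400 participants to four
# equal treatment groups of 100, as in the study design.
import random

random.seed(0)  # arbitrary seed, for reproducibility of this sketch only
participants = list(range(400))          # stand-ins for the 400 respondents
random.shuffle(participants)

# Scenario s gets the s-th block of 100 shuffled participants.
scenarios = {s: participants[(s - 1) * 100 : s * 100] for s in (1, 2, 3, 4)}
print({s: len(group) for s, group in scenarios.items()})
# {1: 100, 2: 100, 3: 100, 4: 100}
```

Shuffling before slicing ensures each participant has the same chance of landing in any treatment, which is what licenses the between-group comparisons reported below.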

These treatments allowed the researchers to isolate cyber response options and assumptions about the role of rival state cyber effects in the crisis. These treatment groups are listed in Table 4.1.

Table 4.1 Treatment groups

Treatment                                                            Number
1. Cyber Response Options (Yes); Assumed Rival Cyber Activity (Yes)  100
2. Cyber Response Options (No);  Assumed Rival Cyber Activity (Yes)  100
3. Cyber Response Options (Yes); Assumed Rival Cyber Activity (No)   100
4. Cyber Response Options (No);  Assumed Rival Cyber Activity (No)   100

N = 400.

To measure escalation effects associated with cyber capabilities (H1), the survey experiment examined participant response preferences using the respondent initial preference (RESP) variable. This variable asked survey respondents to indicate their initial reaction and preferred response to the crisis as de-escalate (1), adopt a proportional response (2), escalate (3), or unknown at this time (4). Coding along these lines allowed the researchers to factor in uncertainty and to capture any differences between what the survey respondents wanted to do initially and what they selected to do after reviewing approved response options across multiple instruments of power. Furthermore, as a 2 × 2 experiment focused on attitudes and preferences, the RESP variable helped the team determine whether the four different treatments altered the decision to escalate as a cognitive process, and how each participant viewed their options given limited information in a rivalry context. The results are shown in the contingency table (Table 4.2 and Figure 4.2).

Table 4.2 Contingency results by treatment

RESP                                  Cyber Trig   Cyber Trig      No Cyber Trig   No Cyber Trig   Total
                                      Cyber Resp   No Cyber Resp   Cyber Resp      No Cyber Resp
De-escalate    Count                  41           44              57              28              170
               Expected Count         42.5         42.5            42.5            42.5            170.0
               % within RESP          24.1         25.9            33.5            16.5            100.0
               % within SCENARIO      41.0         44.0            57.0            28.0            42.5
               % of Total             10.3         11.0            14.2            7.0             42.5
               Standardized Residual  −.2          .2              **2.2           **−2.2
Proportional   Count                  51           46              35              67              199
               Expected Count         49.8         49.8            49.8            49.8            199.0
               % within RESP          25.6         23.1            17.6            33.7            100.0
               % within SCENARIO      51.0         46.0            35.0            67.0            49.8
               % of Total             12.8         11.5            8.8             16.8            49.8
               Standardized Residual  .2           −.5             **−2.1          **2.4
Escalate       Count                  5            3               7               5               20
               Expected Count         5.0          5.0             5.0             5.0             20.0
               % within RESP          25.0         15.0            35.0            25.0            100.0
               % within SCENARIO      5.0          3.0             7.0             5.0             5.0
               % of Total             1.3          0.8             1.8             1.3             5.0
               Standardized Residual  .0           −.9             .9              .0
Uncertain      Count                  3            7               1               0               11
               Expected Count         2.8          2.8             2.8             2.8             11.0
               % within RESP          27.3         63.6            9.1             0.0             100.0
               % within SCENARIO      3.0          7.0             1.0             0.0             2.8
               % of Total             0.8          1.8             0.3             0.0             2.8
               Standardized Residual  .2           **2.6           −1.1            −1.7
Total          Count                  100          100             100             100             400
               Expected Count         100.0        100.0           100.0           100.0           400.0
               % within RESP          25.0         25.0            25.0            25.0            100.0
               % within SCENARIO      100.0        100.0           100.0           100.0           100.0
               % of Total             25.0         25.0            25.0            25.0            100.0

X2 = 32.723, p < .000 (two-sided), ** = standardized residual is ±1.966.

Figure 4.2 Response preferences from wargame simulation.

Escalation was generally low, with only twenty respondents preferring escalation. When respondents did opt to escalate, neither the presence of cyber response options nor the adversary's use of cyber seemed to affect their response preference. Alternatively, when states had cyber response options and there were no signs of rival state cyber effects, participants opted to de-escalate (57) more than expected (42.5). The results were inverted when states were in a crisis that lacked both cyber options and adversary cyber effects (treatment 4). Here there were fewer observed preferences to de-escalate (28) than expected (42.5) and more instances of proportional responses (67) than expected (49.8). The results also lend themselves to categorical variable tests for association using the phi coefficient (Sheskin, 2020). The phi coefficient is 0 when there is no association and 1 when there is perfect association. The value is .286, indicating a weak but significant relationship between treatment group and escalation preferences, consistent with the hypothesis. Cyber options were not associated with escalation and were, in fact, linked to preferences for de-escalation.
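The summary statistics above can be reproduced directly from the Table 4.2 counts. The following sketch is our own verification code (the reported phi is computed as the square root of X² over N); only the observed counts are taken from the chapter:

```python
# Recomputing the Table 4.2 statistics from the observed counts.
from math import sqrt

observed = {                    # rows: RESP; columns: the four treatments
    "De-escalate":  [41, 44, 57, 28],
    "Proportional": [51, 46, 35, 67],
    "Escalate":     [5,  3,  7,  5],
    "Uncertain":    [3,  7,  1,  0],
}

n = sum(sum(row) for row in observed.values())               # 400 respondents
col_totals = [sum(col) for col in zip(*observed.values())]   # 100 per treatment

chi2 = 0.0
residuals = {}
for resp, row in observed.items():
    row_total = sum(row)
    expected = [row_total * c / n for c in col_totals]       # e.g., 42.5 for De-escalate
    residuals[resp] = [(o - e) / sqrt(e) for o, e in zip(row, expected)]
    chi2 += sum((o - e) ** 2 / e for o, e in zip(row, expected))

phi = sqrt(chi2 / n)   # the chapter's reported association measure

print(round(chi2, 3))                          # 32.723
print(round(phi, 3))                           # 0.286
print(round(residuals["De-escalate"][2], 1))   # 2.2 (No Cyber Trig / Cyber Resp)
```

The recomputed values match the table: X² ≈ 32.723, phi ≈ .286, and the flagged residuals (e.g., 2.2 for de-escalation under treatment 3) all fall out of the same counts.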

A second measure of escalation allows the team to differentiate between the RESP and the overall degree of potential escalation based on the instruments of power selected. This measure is less effective since it does not capture the attitude and preference as a cognitive process in line with best practices in experiments, but does allow the researchers to further triangulate their findings. The researchers created a variable odds of escalation (OES) and average odds of escalation (OESAAVG). OES is a summation and adds the escalation scores from across the actual response options selected. OESAAVG is a binary variable coded 1 if the OES score is over the average and 0 if it is under the average (Table 4.3). OESAAVG allows the researchers to look across the treatments and see if there are differences when cyber response options are present and absent.
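The OES and OESAAVG coding just described can be illustrated with a small sketch; the escalation scores below are invented for illustration and are not drawn from the study data:

```python
# Hypothetical illustration of the OES / OESAAVG coding described above.
# Each inner list holds the escalation scores of one respondent's three
# selected response options (values are invented, not study data).
responses = [[1, 2, 3], [3, 3, 3], [1, 1, 2], [2, 2, 3]]

oes = [sum(r) for r in responses]                     # OES: summed escalation score
avg = sum(oes) / len(oes)                             # sample average OES
oesaavg = [1 if score > avg else 0 for score in oes]  # 1 = above-average escalation

print(oes)       # [6, 9, 4, 7]
print(oesaavg)   # [0, 1, 0, 1]
```

Collapsing OES to a binary above/below-average flag is what lets the researchers compare escalation magnitude across the four treatments in Table 4.3.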

Table 4.3 Expected count of escalation events

OESAAVG                         SCENARIO 1   SCENARIO 2   SCENARIO 3   SCENARIO 4   Total
0       Count                   71           52           70           59           252
        Expected Count          63.0         63.0         63.0         63.0         252.0
        Standardized Residual   1.0          −1.4         .9           −.5
1       Count                   29           48           30           41           148
        Expected Count          37.0         37.0         37.0         37.0         148.0
        Standardized Residual   −1.3         1.8          −1.2         .7
Total   Count                   100          100          100          100          400
        Expected Count          100.0        100.0        100.0        100.0        400.0

X2 = 10.725, p < .013 (two-sided), ** = standardized residual is ±1.96.

The results cast further doubt on cyber operations being escalatory. Both treatments 1 and 3 had fewer combined instruments of power above the average coercive potential (29, 30) than expected (37, 37). Of particular interest, when states had cyber response options and escalated, the magnitude tended to be lower, with treatment 1 seeing 29 instances of above-average coercive potential versus 37 expected (−1.3 standardized residual) and treatment 3 seeing 30 instances versus 37 expected (−1.2 standardized residual). These contrast with treatment 2, where there was a cyber trigger and no cyber response options available. Here there were 48 instances of above-average coercive potential versus 37 expected (1.8 standardized residual). Cyber appears to have a moderating influence on how participants responded to the crisis.
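The standardized residuals cited in this paragraph follow the usual formula, (observed − expected) / √expected. A quick check against the Table 4.3 counts (our own verification sketch):

```python
# Verifying the Table 4.3 standardized residuals cited in the text.
from math import sqrt

def std_residual(observed, expected):
    """Standardized residual: (O - E) / sqrt(E)."""
    return (observed - expected) / sqrt(expected)

# Above-average coercive potential: observed counts vs. expected count of 37.
print(round(std_residual(29, 37.0), 1))   # treatment 1: -1.3
print(round(std_residual(30, 37.0), 1))   # treatment 3: -1.2
print(round(std_residual(48, 37.0), 1))   # treatment 2: 1.8
```

The residuals reproduce the table exactly, confirming that only treatment 2 (cyber trigger, no cyber response options) pushes coercive potential above expectation.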

Turning to the second hypothesis, to measure complementary effects associated with the survey experiment, the researchers examined how participants combined instruments of power. Participants were allowed to recommend three response options to the crisis. These response options were organized by instrument of power on the aforementioned Likert scale. Each instrument had six options. In treatments where participants had cyber response options, six additional options were added, each with an equivalent level of escalation. This gave participants a total of twelve responses per instrument in cyber treatments. Since the packets involved four instruments of power (diplomatic, information, military, economic), participants had a total of 24 response options in noncyber treatments (treatments 2, 4) and 48 in cyber response treatments (1, 3). Participants could choose three response options all within one instrument of power, or spread them across multiple instruments of power. Table 4.4 shows the number of response options selected for each instrument of power across the treatments. There were no statistically significant differences across the treatments with respect to the distribution of the responses.

Table 4.4 Treatment groups and instrument of power response preferences

Treatment   Diplomatic   Information   Military   Economic
1           80           88            57         53
2           81           84            54         67
3           70           85            77         50
4           71           86            60         62

X2 = 12, p < .213 (two-sided).

In each survey experiment, the researchers used this information to create a variable called COMB (combined) that measured the number of instruments of power a respondent used. This number ranged from one to three. Since the survey experiments asked participants to select three options, they could either select all three from one instrument of power or employ up to three combined instruments of power. To confirm the second hypothesis, one would need to see higher-than-expected instances of combining instruments of power when comparing conventional versus cyber escalation preferences.

To evaluate hypothesis two along these lines, the researchers separated treatments 2 and 4 from treatments 1 and 3 to compare escalation preferences and combined instruments of power. In Table 4.5, the conventional escalation columns show how many times respondents used 1, 2, or 3 instruments of power, differentiating between treatments that saw escalation and those that saw no escalation.Footnote 5

Table 4.5 Conventional versus cyber escalation

              Conventional Escalation         Cyber Escalation
Inst Power    No Escalation   Escalation      No Escalation   Cyber Escalation
1             +0(.5)          +1(.5)          6(6.4)          +1(.6)
2             18(16.8)        15(16.2)        19(23.8)        **7(2.2)
3             84(84.7)        82(81.7)        158(152.8)      9(14.2)

Conventional: X2 = 1.217, p < .544 (two-sided), N = 200 (Treatments 2, 4).
Cyber: X2 = 13.726, p < .005 (two-sided), N = 200 (Treatments 1, 3).
** = standardized residual > 1.96.
+ = count is less than 5 (cannot evaluate).
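The cyber-escalation test in Table 4.5 can be reproduced from the printed counts and expected values. The sketch below is an illustrative pure-Python computation (counts transcribed from the table; not the researchers' code).

```python
# Illustrative re-computation of the cyber-escalation chi-square test in
# Table 4.5 (treatments 1 and 3). Rows: 1, 2, or 3 instruments of power;
# columns: no escalation vs. cyber escalation.
import math

observed = [
    [6, 1],    # 1 instrument of power
    [19, 7],   # 2 instruments of power
    [158, 9],  # 3 instruments of power
]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
grand = sum(row_totals)  # N = 200 (treatments 1, 3)

expected = [[rt * ct / grand for ct in col_totals] for rt in row_totals]
chi2 = sum(
    (o - e) ** 2 / e
    for o_row, e_row in zip(observed, expected)
    for o, e in zip(o_row, e_row)
)
df = (len(observed) - 1) * (len(observed[0]) - 1)  # (3-1)*(2-1) = 2

# Standardized residual for the flagged cell (2 instruments, cyber
# escalation): the table marks it ** because the residual exceeds 1.96.
resid = (observed[1][1] - expected[1][1]) / math.sqrt(expected[1][1])

# The computed statistic matches the reported X2 = 13.726 (p < .005).
print(round(chi2, 3), df, round(resid, 2))
```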

Third, to evaluate substitution, the researchers compared percentages. There should be a higher rate of substitution, measured as selecting a cyber option, in treatment 3 than in treatment 1. In treatment 3, participants had no evidence that the rival state was using cyber capabilities, making them more likely to substitute cyber effects because of the lower implied information costs. A respondent would look at the situation and see more utility in using cyber options because no adversary cyber effects were present. Alternatively, when adversary cyber effects are present, participants will assess higher information costs: they will be more concerned about adversaries being able to mitigate the expected benefit of any cyber response (Table 4.6).

Table 4.6 Coercive potential

Treatment   Escalation   Escalation Involved Cyber
1           35           6 (17.14%)
2           50           NA
3           21           11 (52.38%)
4           48           NA

N = 400.
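The substitution rates in Table 4.6 follow directly from the printed counts. The sketch below re-derives them (a pure-Python illustration using the table's figures, not the researchers' code).

```python
# Re-deriving the Table 4.6 substitution percentages from the raw counts.

escalation = {1: 35, 3: 21}        # escalatory responses per treatment
cyber_escalation = {1: 6, 3: 11}   # of those, how many involved a cyber option

rates = {t: 100 * cyber_escalation[t] / escalation[t] for t in (1, 3)}

# Treatment 3 (no visible adversary cyber activity) shows roughly three
# times the substitution rate of treatment 1: 52.38% versus 17.14%.
print({t: round(r, 2) for t, r in rates.items()})
```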

As predicted, there was more observed substitution in treatment 3 than in treatment 1. In treatment 3, 52.38% of the response options selected (i.e., coercive potential) involved cyber equivalents, compared with 17.14% in treatment 1. Because there were no indications of adversary cyber capabilities in treatment 3, participants likely perceived a cross-domain advantage and hence lower information costs. This alters the hypothetical elasticity of demand, making cyber a closer substitute. Table 4.7 breaks out the substitution further.

Table 4.7 Coercive potential and cyber substitution

Treatment   Diplomatic   Information   Military   Economic
1           20(3)        10(4)         12(1)      7(1)
3           10(5)        7(5)          14(7)      4(2)

N = 2,000.

In treatment 1, cyber responses were substituted at a higher rate for information effects (40%) than other instruments of power. Three of the four substitutions involved the option to “burn older exploits in adversary systems disrupting their network operations in order to signal escalation risks.”

In treatment 3, cyber responses were heavily substituted for conventional responses, over 50% of the time. The most common military substitution (4 of 7) involved opting to "compromise data of individual members of the military to include identity theft, fraud, or direct social media messaging." This option substituted for the conventional response: "Conduct a public show of force with air and naval assets challenging known defense zones and testing adversary response." Participants thus opted for information warfare over more conventional displays of military force. The most common information substitution remained burning "older exploits in adversary systems disrupting their network operations in order to signal escalation risks." The most common diplomatic substitution in the packet was "use spear phishing, waterholing, and other methods to expose sensitive political information." Again, information warfare was a substitute for more conventional forms of coercion when the adversary posture suggested a low probability of response to information operations.

Another factor stands out when looking at the descriptive statistics differentiating conventional and cyber escalation, measured as coercive potential. As seen in Table 4.6, there is a higher observed rate of coercive potential in noncyber response treatments. The availability of cyber response options appears to reduce coercive potential by substituting information warfare for more traditional approaches to coercion.

Overall, the evidence suggests that cyber response options can moderate a conflict between rival powers. Respondents generally used cyber options either to respond proportionally or to de-escalate the situation until more information could be gathered. What we cannot explain is whether the results were influenced by the presence of nuclear weapons on both sides, different regime types, or other possible confounding variables, because our sample was not large enough to enable additional treatments.

6 Case Study Probe: The United States and Iran

To further examine the concept of cyber off ramps and contemporary escalation dynamics, we turn to a theory-guided case study examination (Levy, Reference Levy2008). Since survey experiments are prone to external validity challenges (Renshon, Reference Renshon2015), a case analysis helps triangulate the findings from the three hypotheses. To this end, interactions between the United States and Iran in the summer of 2019 offer a viable case for examination (Valeriano & Jensen, Reference Valeriano and Jensen2019). Referring to the prior hypotheses, we argue that cyber operations are not escalation prone (H1). We also note that cyber operations are more likely to be used as complements when states do consider escalating (H2), and that cyber operations are more likely to be used as substitutes when there are no indications of rival cyber activity (H3). We now examine our developing theory’s plausibility in the context of this case.

6.1 Origins

The full picture of what happened between Iran and the United States in the summer of 2019 will continue to develop as classified information is released, but what we do know suggests there was a significant confrontation with cyber operations playing a role as a coercive instrument alongside diplomatic, economic, and military inducements in the dispute. Given that Iran and the United States maintain an enduring rivalry and have a history of using force, even if through proxies, this case was particularly escalation prone. Yet, instead of going to war, Tehran and Washington pulled back from the brink. The key question is why?

As long-term rivals, the United States and Iran have been at loggerheads over control of the Middle East and resource access for decades (Thompson & Dreyer, Reference Thompson and Dreyer2011). The origins of the contemporary rivalry started, from an Iranian perspective, in 1953, when the CIA helped its UK counterparts stage a coup (Kinzer, Reference Kinzer2008). From the US perspective, the rivalry dates to the Iranian Revolution and the 1979 overthrow of the Shah, who had been restored to power in the 1953 coup (Nasri, Reference Nasri1983). The new regime, led by Ayatollah Ruhollah Khomeini, launched a revisionist series of direct and proxy challenges against US interests in the region (Ramazani, Reference Ramazani1989) that culminated in a protracted conflict with Iraq. During the Iran–Iraq War, the United States backed Iran's rivals, including Iraq and the larger Gulf Cooperation Council. Iran in turn backed Shiite groups across the Middle East implicated in attacking US forces in Lebanon.

In the aftermath of the Iranian Revolution and during the subsequent Iran–Iraq War, the United States engaged in limited but direct military engagements with Iran, including the failed Desert One raid to rescue American hostages (1980) and Operation Earnest Will (1987–1988), in which the US Navy escorted Gulf State oil tankers in convoys to protect them from Iranian military forces (Wise, Reference Wise2013). This period included multiple naval skirmishes, such as Operation Nimble Archer (1987) and Operation Praying Mantis (1988), in which US forces attacked Iranian oil rigs and military forces in retaliation for Iranian mining of the Strait of Hormuz and repeated attacks on shipping. Contemporary US perspectives on Iranian motives and likely foreign policy preferences emerged during this period, with the Washington foreign policy establishment seeing Iran as a revisionist, revolutionary state.Footnote 6 Similarly, Iranian attitudes toward the United States hardened even further as Washington labeled the country part of an Axis of Evil (Shay, Reference Shay2017) and invaded its neighbor, Iraq. Iran opted to counter by funding proxy Shiite groups in Iraq and undermining the transitional Iraqi government.Footnote 7

Parallel to its proxy struggle with the United States in Iraq, Tehran sponsored terror groups that attacked US interests across the region and accelerated its nuclear weapons program.Footnote 8 Starting in 2003, the International Atomic Energy Agency began pressuring Iran to declare its enrichment activities, which led to multilateral diplomatic efforts beginning in 2004. These efforts culminated in UN Security Council resolutions expanding sanctions on Iran over the subsequent years, with the United States joining the multilateral effort (P5+1) in April 2008 following a formal Iranian policy review. Backed by the larger range of diplomatic and economic sanctions in place since the Iranian Revolution, the pressure resulted in the 2015 Joint Comprehensive Plan of Action (JCPOA). This agreement limited Iran's ability to develop nuclear weapons and included European allies as treaty members, distributing the burden of enforcement internationally (Mousavian & Toossi, Reference Mousavian and Toossi2017).

In 2018, the Trump administration withdrew from the agreement, arguing that Iran was still building nuclear weapons and directing proxy warfare against US allies (Fitzpatrick, Reference Fitzpatrick2017). Rather than work within the JCPOA, which had reduced tensions in the region, the Trump administration ramped up sanctions and, in 2019, designated the Islamic Revolutionary Guard Corps, including its Quds Force (Tabatabai, Reference Tabatabai2020), a terrorist organization (Wong & Schmitt, Reference Wong and Schmitt2019). The leader of the Quds Force, Qasem Soleimani, became a prime target (Lerner, Reference Lerner2020).

6.2 Cyber and Covert Operations

Given Iran's use of proxies, covert operations have generally colored the relationship between Iran and the United States. These activities have included the use of cyber capabilities. The United States and Iran were deep in a cyber rivalry, with twenty cyber conflicts between 2000 and 2016 (Valeriano et al., Reference Valeriano, Jensen and Maness2018). Data on cyber interactions only begin in 2000, making it difficult to catalog the full range of covert and clandestine activity between 1979 and 2000.

With respect to cyber operations, the United States likely initiated seven cyber operations while Iran launched thirteen (Maness et al., Reference Maness, Valeriano and Jensen2019). The most significant event was the Stuxnet attack, launched by the United States and Israel, which disabled centrifuges at the Natanz enrichment facility (Lindsay, Reference Lindsay2013). The overall impact of the attack on Natanz is intensely debated, but assessments at the time suggested a limited effect on Iran's ability to produce nuclear materials (Barzashka, Reference Barzashka2013). It is still unknown what effect the Stuxnet attack had on Iranian internal calculations and assessments of US capabilities.

The pattern between the United States and Iran has often been for the United States to rely on cyber espionage and degrade operations to harm Iranian interests and activities, while Iran generally seeks to avoid direct confrontation in cyberspace (Valeriano & Maness, Reference Valeriano and Maness2015). Saudi Arabia is a frequent proxy cyber target of Iran, given that the United States is seen as its protector and ally. Iran’s actions against the United States mostly entail basic espionage, economic warfare, and the typical probes and feints in cyberspace (Eisenstadt, Reference Eisenstadt2016).

Another key aspect of the covert competition, and the prime threat that Iran offered to the United States, was the use and control of proxy forces in the region. The Iranian Quds force controlled proxy actors in the region (Eisenstadt, Reference Eisenstadt2017), with Houthi forces seeking to attack forces in the region with Scud missiles (Johnston et al., Reference Johnston, Lane, Casey, Williams, Rhoades, Sladden, Vest, Reimer and Haberman2020). The awareness that Hezbollah was taking clear direction from Iran altered the dynamics of the dispute between Israel and its regional rivals (Al-Aloosy, Reference Al-Aloosy2020). Entering the summer of 2019, Iran’s use of proxy forces dominated the concerns of the Trump administration (Simon, Reference Simon2018; Trump, Reference Trump2018).

6.3 The Summer 2019 Crisis

As the summer began in 2019, tensions accelerated due to concerns about Iranian proxy warfare, the use of cyber actions in the region, and the pursuit of nuclear weapons after the end of the JCPOA (see Figure 4.3 for the timeline of events). In addition to increased hacking activities, Iran attacked tankers in the Persian Gulf, with two incidents occurring in May of 2019. At one point, Iranian operatives were seen placing unidentified objects on the hull of a tanker before it was disabled. Iran “called the accusations part of a campaign of American disinformation and ‘warmongering’” (Kirkpatrick et al., Reference Kirkpatrick, Perez-Pena and Reed2019).

Figure 4.3 Iran–United States Case Timeline.


On May 5, 2019, following intelligence reports that Iran was plotting an attack on US interests in the Middle East, National Security Adviser John Bolton announced (Bolton, Reference Bolton2019) the deployment of a carrier strike group and bomber task force to the Middle East to "send a clear and unmistakable message to the Iranian regime that any attack on the United States interests or those of our allies will be met with unrelenting force." The crisis escalated on May 12, when four commercial vessels, including two Saudi Aramco ships, were targeted by sabotage attacks attributed to Iran off the coast of the United Arab Emirates (Yee, Reference Yee2019). By May 13, the Pentagon announced plans to deploy as many as 120,000 troops to the region, in addition to the fighter squadrons and naval task forces already headed there (Schmitt & Barnes, Reference Schmitt and Barnes2019). On May 14, Iranian proxies in Yemen launched a massive attack against Saudi oil infrastructure using a mix of drones and cruise missiles (Hubbard et al., Reference Hubbard, Karasz and Reed2019). By the end of May, the United States had implicated Iranian proxies in firing rockets at US interests in Iraq and responded with additional troop deployments and weapon sales to Saudi Arabia. These measures added to the range of economic sanctions the Trump administration initiated following its departure from the JCPOA (News, Reference News2018).

The increasingly militarized crisis continued into June. On June 6, 2019, Iranian-backed rebels in Yemen shot down an MQ-9 Reaper, leading the US Central Command (CENTCOM) commander to warn that US forces faced an imminent threat throughout the region (Kube, Reference Kube2019). On June 13, magnetic mines, likely delivered by Iranian unmanned subsurface vehicles, damaged two additional commercial vessels, leading the United States to announce additional troop deployments.

The downing of a US RQ-4A Global Hawk UAV on June 20, 2019, served notice that the conflict was likely to escalate. The United States deemed it an unprovoked attack on an aircraft in international airspace. President Trump ordered a military strike on June 20 but halted the operation over fears of mass casualties on the Iranian side, or of the impact of a war with Iran on his reelection prospects. He stated on Twitter, "We were cocked & loaded to retaliate last night on 3 different sights when I asked, how many will die. 150 people, sir, was the answer from a General. 10 minutes before the strike I stopped it, not proportionate to shooting down an unmanned drone" (Olorunnipa et al., Reference Olorunnipa, Dawsey, Demirjian and Lamothe2019).

Instead of escalating the conflict, on June 22 the United States leveraged a series of cyber operations to respond proportionally to Iranian provocations. There seem to have been a few distinct operations; it is unclear how many separate teams or tasks were directed against Iran. One operation disabled Iran's ability to monitor and track ships in the region by attacking its shipping databases (Barnes, Reference Barnes2019b). Another operation, by US Cyber Command, was said to have disabled Iranian missile sites, making them vulnerable to air attack (Nakashima, Reference Nakashima2019). In addition, the United States was also likely dumping Iranian code on the site VirusTotal (Vavra, Reference Vavra2019), potentially impairing Iran's ability to retaliate by exposing its tools so that other defenders were prepared.

The cyber operations served to signal risk to the Iranians and to preserve further options to manage the crisis if it continued. The proportional response to Iran's activities arguably allowed the conflict to stabilize and helped push the two states away from the brink of war. On the road to war, cyber options provide a critical path away from confrontation while still managing to address domestic audience concerns.

On June 24, cyber security scholar, Bobby Chesney, observed, “Indeed, reading the tea leaves from the past weekend, it appears the cyber option helped ensure there was an off-ramp from a kinetic response that might have led to further escalation.” (Pomerleau & Eversden, Reference Pomerleau and Eversden2019). On June 25, Valeriano and Jensen (Reference Valeriano and Jensen2019) wrote a column in The Washington Post that stated, “contrary to conventional wisdom, cyber options preserve flexibility and provide leaders an off-ramp to war.”

Following a tense summer, the conflict moved into a new phase in late 2019 and 2020 with the killing of an American contractor in a rocket attack on a US base in Iraq on December 27, 2019 (Barnes, Reference Barnes2019a). The United States retaliated with strikes against an Iranian proxy, Kataib Hezbollah, in Iraq and Syria. Kataib Hezbollah supporters then attacked the American embassy in Iraq, leading the US president to authorize the assassination of IRGC Quds Force Commander Qasem Soleimani on January 3, 2020 (Zraick, Reference Zraick2020). The United States moved to deploy 4,000 additional troops to the region, and Iran retaliated by launching missile strikes on US bases in Iraq, wounding over a hundred soldiers (Zaveri, Reference Zaveri2020). The conflict was finally de-escalated when the United States chose not to respond to the Iranian attack, noting that no one had been killed. Since six months separated the summer and winter 2019/2020 incidents, they are treated as two distinct, albeit linked, crisis cases.

6.4 Assessing the Case

Assessment of these events suggests that the crisis with Iran could have escalated in June 2019 after the downing of the Global Hawk UAV, a significant piece of military hardware costing around $220 million (Newman, Reference Newman2019). Demands for retaliation and escalation were rife in the foreign policy community and within the Trump administration (Trevithick, Reference Trevithick2019).

Instead of escalation, the United States took a different path, consistent with Hypothesis 1. By responding through cyber actions, the United States did two things. First, it demonstrated commitment and credibility to counter Iranian operations by signaling intent for future operations that could have dramatic consequences for Iranian power in the region. Second, these cyber operations also served as Phase 0 operations meant to shape the environment and set the conditions should the United States want to use additional military options in the future. With its defensive systems compromised, Iran was vulnerable to an American attack that never came, and simultaneously subject to a cyber substitute consistent with Hypothesis 3. Cyber operations served to de-escalate the conflict by vividly illustrating the shadow of the future for continued Iranian harassment in the region.

President Trump also increased targeted sanctions directed at Iran's leadership and threatened further strikes, stating that he did not need Congressional approval because of the existing authorization for military forces in the region to respond to terrorist threats (Crowley, Reference Crowley2020).Footnote 9 These moves are consistent with Hypothesis 2, which suggests that cyber operations are used to complement other forms of power when escalation is under consideration.

When challenged by a strike on an American asset in the region, the United States had two options: respond in kind or escalate the conflict. Doing nothing would have incurred significant audience costs among President Trump's base of support because it would have demonstrated weakness. Escalation would likely have provoked retaliation by proxy forces across the Middle East, leading to significant US casualties. War would also have harmed the President's reelection chances after he had promised a reduction in tensions and an end to the wars in the region (Tesler, Reference Tesler2020).

Choosing the option of cyber operations and increased sanctions fits clearly with an off-ramp perspective on crisis bargaining. As Hypothesis 3 argued, cyber operations are likely to be used as substitutes when there are no indications of adversary cyber activity. Here, cyber options substituted for military options because Iran did not escalate in the cyber domain in response to US cyber moves, and Washington likely judged that it held a domain advantage.

Cyber options offered a path out of the conflict by responding in ways that targeted Iran's command and control functions directly, demonstrating Iran's decreased capacity to control its battlespace. Of particular interest, some of the cyber operations specifically limited Iran's ability to retaliate in cyberspace by leaking the malicious code Tehran was likely to use. Although other military response options were considered after the cyber operations were leveraged, none were used. Cyber options can serve as off-ramps from the path to war.

7 Conclusion: The Promise and Limit of Cyber Off-Ramps

Based on observations from the experiments and a case study of the US–Iranian crisis in the summer of 2019, we conclude that cyber response options limit the danger of escalation. Used correctly, as signals to the opposition to moderate its behavior or as demonstrations of resolve, cyber operations allow states to check an opponent with minimal danger of escalation. Cyber options allow a state to express discontent and reshape the balance of information between two opposing parties.

To date, states appear to use cyber options to decrease tensions. This is a counterintuitive finding when many in the discipline suggest either that cyber operations are inherently escalatory or that the nature of conflict has changed. It might be true that conflict has changed, but information operations and cyber operations are generally less escalatory, and therefore less dangerous, than confronting the opposition with conventional weapons. In other words, the logic of substitutes and complements appears to apply to the digital domain. Our research suggests that there is less danger in using cyber operations as off-ramps to initial confrontations. We must be clear that we are not suggesting cyber operations as a first-strike option. To the contrary, cyber operations likely risk sparking a security dilemma when the target is less capable. Yet, as reactions to initial hostility, cyber options provide a path away from war.

Despite a demonstrated case, as well as empirical and experimental evidence suggesting that cyber operations are not associated with crisis escalation, there are still limits to these findings. Inequality, and the inability of a state to respond to a cyber action with cyber response options of its own, increases the danger of escalation. The behavior and strategic posture of the target can be a critical part of the equation. A history of disputes that creates overall tension in a dyad can lead to escalation if the issue is salient enough, even when cyber response options exist (Vasquez, Reference Vasquez1993). Our simulation was restricted to one interaction, meaning that we did not test the conditions for escalation across a series of disputes.

The policy advice that emerges from this research is to integrate cyber options into a “whole of government” response tailored to each contingency. In an extended bargaining situation, cyber responses to initial moves can reveal information and decrease tensions, countering much of the hype and hysteria about digital technology exacerbating conflict. That said, cyber operations must be evaluated in terms of the extent to which they act as a complement or substitute, as well as how they might lead to misperception or undermine global connectivity, given the fact that the networks cyber operations target and rely on are largely owned by the private sector. Misperception is still a risk in the digital domain.

The policy goal should be to adopt moderate cyber operations that shape the environment and avoid escalation risks, even if those risks are generally low. By revealing and gathering information in a bargaining situation, cyber options can help decrease tensions by giving states the space they need to maneuver and seek an end to a conflict. Using cyber operations offensively early in the precrisis period, especially operations meant to critically wound command and control facilities or cause death, would likely lead to escalation.

5 Cyber Peace and Intrastate Armed Conflicts: Toward Cyber Peacebuilding?

Jean-Marie Chenou and John K. Bonilla-Aranzales
1 Introduction

South Africa is renowned for the remarkable peacebuilding process that followed the transition from apartheid in the 1990s, particularly in terms of reconciliation, restorative justice, forgiveness, and healing from a violent past (Borris, Reference Borris2002). However, the reconciliation process is ongoing, as seen in the first five days of September 2019, when xenophobic looting and violent attacks erupted in Johannesburg. This time, the victims of the violence were not black South Africans but Nigerians who lived and worked in South Africa (Holmes, Reference Holmes2019). This episode of violence was shaped by different factors, including promotion on social media. The example highlights a common feature of online communication in conflict-torn and postconflict societies in various parts of the world. The digital transformation has blurred the boundaries between cyberspace and "physical" space, creating a continuum between online and offline violence. As such, cyberspace has become a realm for political confrontation. Information and data can be tools to empower dissidents while also serving as weapons for users, decision makers, governments, and armed groups (Berman, Felter & Shapiro, Reference Berman, Felter and Shapiro2020; Duncombe, Reference Duncombe2019). In this context, threats of violence are published on webpages and social media platforms to create and exacerbate a climate of fear. Violence targeted at specific minority groups reproduces offline practices of discrimination and hatred (Alexandra, Reference Alexandra2018).
Moreover, social media and messaging applications are used to mobilize populations, generating large-scale collective actions that have created meaningful change or calls to action worldwide, as in the cases of the Arab Spring (Salem, Reference Salem2014), the Black Lives Matter movement in the United States (Zeitzoff, Reference Zeitzoff2017), and the feminist movement in Argentina (Chenou and Másmela, Reference Chenou, Chaparro-Martínez and Mora Rubio2019). These dynamics are particularly important in postconflict contexts, where new opportunities for truth and reconciliation emerge while conflictual relationships might migrate online.

Many cybersecurity studies focus on state actors and, more specifically, on great powers with strong capacities to conduct cyber operations on a global scale, such as the Stuxnet attack (Valeriano and Maness, Reference Valeriano and Maness2018) or the digital attack on the Ukrainian power grid in 2015 (Deibert, Reference Deibert2018). However, the resolution of intrastate conflicts, whose dynamics undermine the existence of a sustainable, stable, and secure cyberspace, usually goes ignored. The use and impact of Information and Communication Technologies (ICTs) in cyberspace during intrastate conflicts has also drawn much attention, expanding the analysis of the media's role in conflicts. However, cyberspace's role in peacebuilding has been less studied, despite the Tunis Commitment for the Information Society, adopted by the UN in 2005, which acknowledges the potential of ICTs to promote peace by "assisting post-conflict peacebuilding and reconstruction" (United Nations, 2005). As illustrated by the South African riots mentioned above, the issue of peacebuilding in cyberspace goes beyond access to and the safe use of technology; it also includes the regulation of violent content and information. This chapter proposes a dialogue between Internet studies and the analysis of peacebuilding to define the notion of cyber peacebuilding, based on the cases of Colombia and South Africa. Drawing upon the four pillars of cyber peace (Shackelford, Reference Shackelford2020, preface), it identifies the main venues for cyber peacebuilding research. We propose a working definition of cyber peacebuilding as those activities that delegitimize online violence, build capacity within society to peacefully manage online communication, and reduce vulnerability to triggers that may spark online violence. These efforts include, but are not limited to, the prevention of the use of online violence as a conflict reduction strategy.
They also seek to address the structural causes of conflict by eliminating online discrimination, detecting possible threats and power abuses, and promoting inclusion and peaceful communication in cyberspace.

This chapter, organized into three parts, contributes to structuring the emerging field of cyber peacebuilding research. It builds a bridge between cyber peace, understood as a global public good, and its implementation at the national level, drawing on the cases of South Africa and Colombia.

It begins by broadening the perspective of cyber peace studies to include intrastate armed conflicts, located mostly in the Global South. The second section outlines the challenges posed by intrastate conflicts for global cyber peace and draws upon the cybersecurity and conflict resolution literatures to define cyber peacebuilding. The third section focuses on how the four pillars of cyber peace used as a framework in this volume, namely human rights, access and cybersecurity norms, multistakeholder governance, and stability, can help structure cyber peacebuilding research and inform policymakers, with a particular focus on South Africa and Colombia. Finally, the chapter concludes with the relevance of cyber peacebuilding research and suggests some directions for further work on the issue.

2 Toward a Comprehensive Cyber Peacebuilding Approach

The use of ICTs both affects the dynamics of violent disputes and helps to generate peacebuilding activities (Puig, Reference Puig2019). The use of these technologies does not follow a deterministic path. Technologies, including social media platforms, provide new channels of communication between parties that can increase harm as well as enable novel forms of cooperation. To better understand their impact, we first explore some challenges that intrastate armed conflicts generate on a global scale, then discuss the role of cyberspace in internal conflicts, and finally propose some ideas about the relevance of cyber peacebuilding in intrastate conflict resolution scenarios.

Intrastate armed conflicts have emerged as a new complex challenge globally, particularly in the Global South (Pettersson & Öberg, Reference Pettersson and Öberg2020). A substantial increase in intrastate disputes occurred in the post–Cold War period, becoming the most frequent and deadly form of armed conflict in the world (Mason & Mitchell, Reference Mason and Mitchell2016), with devastating consequences at social and psychological levels (Wallensteen, Reference Wallensteen2018). Intrastate armed conflicts can be defined as civil wars (Sarkees & Wayman, Reference Sarkees and Wayman2010) or understood as asymmetric conflicts (Berman, Felter & Shapiro, Reference Berman, Felter and Shapiro2020). Intrastate armed conflicts include periods of military hostility between government security forces and members of one or more armed opposition groups within a state lasting ten or more days, without regard to the number of fatalities (Mullenbach, Reference Mullenbach2005). They can be categorized according to the dispute’s issue and the rebels’ goals, such as ideological revolutions, ethnic revolutions, and secessionist revolts. Moreover, they can be characterized by their causes: internal armed conflicts can be explained by greed, centered on individuals’ desire to maximize their profits; grievance, where conflict occurs as a response to socioeconomic or political injustice; and opportunity, which highlights factors that make it easier to engage in violent mobilizations (Cederman & Vogt, Reference Cederman and Vogt2017).

The role of cyberspace in internal conflicts can be interpreted as a double-edged sword, as it enhances the interaction between users, digital platforms, and governmental agencies across multiple technological devices. However, whether its use proves positive or negative does not depend only on the users, who range from ordinary citizens to political leaders, rebels, and extremist groups, among other societal actors, all of whom interact using ICTs. The social and political contexts of its use are equally relevant because those conditions allow for the presence of new actors that behave by complex rules, which undoubtedly changes the dynamics of civil wars and peacebuilding scenarios. In short, cyberspace matters in the development and ending of intrastate conflicts because they have become information-centric (Berman et al., Reference Berman, Felter and Shapiro2020; Steinberg, Loyle, & Carugati, in this volume).

Cyberspace capabilities contribute to the creation and tracking of analytical elements concerning the tensions, positions, narratives, and changes in the domestic balance of power of states and non-state actors. Cyberspace thus offers the possibility of developing conflict prevention actions, as discussed in Chapter 4. Moreover, it represents a nurturing ground for the generation and promotion of conflict resolution initiatives. As Ramsbotham, Miall, and Woodhouse (Reference Ramsbotham, Miall and Woodhouse2016, p. 432) argued, the “virtual world of cyberspace is, therefore, contested and conflictual in the same way as the ‘real’ world is, but the challenges are the same in the sense that emancipatory agendas of conflict resolution apply as much to cyber peacemaking as to ‘conventional’ peacemaking.” In short, this digital space represents a hybrid and dynamic environment (Gohdes, Reference Gohdes2018), in which uncertainty and threats emerge, but also in which the conflicting parties can create peaceful ways to coexist.

The potential for utilizing cyberspace in peacebuilding activities, particularly to enhance the role of mediators and generate policy change, is a positive example of the use of such technologies (Tellidis & Kappler, Reference Tellidis and Kappler2016; Puig Larrauri & Kahl, Reference Puig Larrauri and Kahl2013). A relatively recent development in cyberspace is the emergence of social media, where users can create content and interact across both micro and macro communities (Kaplan & Haenlein, Reference Kaplan and Haenlein2012). The use of social media has undoubtedly changed how we communicate and relate to our world. Its negative uses have raised complex new concerns about ethical and security issues. The recruitment of extremists (Weimann, Reference Weimann, de Guttry, Capone and Paulussen2016; Walter, Reference Walter2017), the increasing polarization among the minority groups who are most active in discussions about public affairs (Barberá, Reference Barberá, Persily and Tucker2020), and the promotion of hate speech (Mathew et al., Reference Mathew, Dutt, Goyal and Mukherjee2019) are some negative uses that heighten conflict dynamics, not only in cyberspace but also in physical space. However, the use of social media also reduces the costs of information distribution in the framework of violent conflict (Hochwald, Reference Hochwald2013), which could generate new social mobilizations and reduce collective action problems (Margetts et al., Reference Margetts, John, Hale and Yasseri2015). Additionally, social media can generate new data and information about the conflict environment that might forecast new violent actions. Its use is also a critical factor in the promotion of narratives that could establish peaceful engagement using a bottom-up approach and could even help foster polycentric information sharing, as was discussed in Chapter 3.

Against this background, our definition of cyber peacebuilding draws upon different strands of literature. Previous efforts to analyze the role of ICTs in the termination of conflicts include cyber peacekeeping and the ICTs for peace frameworks. Moreover, we subscribe to the positive definition of peace adopted by cyber peace scholars. Finally, our definition of cyber peacebuilding is based on a contemporary conflict resolution approach that echoes critical cybersecurity perspectives.

Along with the diffusion of interactions into cyberspace in conflict-torn and postconflict countries, the role of ICTs in peacekeeping operations, and as tools to promote peace, has been increasingly acknowledged by scholars and intergovernmental organizations. From the use of big data in peacekeeping operations (Karlsrud, Reference Karlsrud, Kremer and Müller2014) to the institutionalization of cyber peacekeeping teams and operations in the United Nations, such as the United Nations’ Digital Blue Helmets (Almutawa, Reference Almutawa2020; Robinson et al., Reference Robinson, Jones, Janicke and Maglaras2019; Shackelford, Reference Shackelford2020), the literature has broadened to include cyberspace in the analysis of peacekeeping. Cyber peacekeeping is an evolution of an idea that emerged in the 1990s, which posited that ICTs could promote peace. During the process that led to the World Summit on the Information Society, the idea of ICTs being used for peace was further developed and included in the Tunis Commitment for the Information Society (United Nations, 2005). However, the use of the concept remained limited in scholarly publications, with some exceptions (see Laouris, Reference Laouris2004; Spillane, Reference Spillane2015; Young & Young, Reference Young and Young2016), and declined with the massification of social media and the subsequent debate on its role in polarization. While the ICTs for peace scholarship generally focuses on access and the infrastructure layer from a techno-optimistic perspective, an analysis of the content layer, and of the particular role of social media in conflict and peace dynamics, is a starting point for developing novel inquiries.

Another source of inspiration for cyber peacebuilding is the ongoing effort to promote a positive definition of cyber peace in a scholarly debate primarily dominated by the issue of cyberwar. More specifically, we situate cyber peacebuilding within “the construction of a network of multilevel regimes that promote global, just, and sustainable cybersecurity by clarifying the rules of the road for companies and countries alike to help reduce the threats of cyber conflict, crime, and espionage to levels comparable to other business and national security risks” (Shackelford, Reference Shackelford2019, p. 163). In this chapter, we propose an analysis of a cyber peacebuilding approach, which mainly focuses on the national level in postconflict contexts, but also includes the participation of local and international actors. Moreover, the analysis of conflictual contexts and peacebuilding in the digital era can help explore new ways to address the increasing polarization at work in mature democracies.

Finally, cyber peacebuilding adopts a human-centered approach and promotes an emancipatory normative stance on the provision of cybersecurity (Collins, Reference Collins, Salminen, Zojer and Hossain2020). Within this context, cyber peacebuilding is a reformulation and an extension of the definition of peacebuilding adapted to the digital age. Drawing upon the definition of peacebuilding proposed by the Alliance for Peacebuilding (2012), we define cyber peacebuilding as an active concept that captures those activities that delegitimize online violence, build capacity within society to peacefully manage online communication, and reduce vulnerability to triggers that may spark online violence. These activities include, but are not limited to, preventing the use of online violence as a conflict strategy and highlighting the role of users, states, and Big Tech companies in this regard. They also seek to address the structural causes of conflict by eliminating online discrimination; enhancing the territorial scope and impact of peacebuilding mechanisms; and promoting inclusion and peaceful communication in cyberspace. As such, cyber peacebuilding efforts represent an essential stepping stone in the pursuit of cyber peace as a global public good.

Such a focus on cyber peacebuilding is not entirely new (see, e.g., Puig Larrauri & Kahl, Reference Puig Larrauri and Kahl2013; Tellidis & Kappler, Reference Tellidis and Kappler2016; AlDajani & Muhsen, Reference AlDajani2020), even though its expression rarely appears as such. This chapter argues that it can be a useful concept for establishing and structuring a scholarly dialogue that explores the multiple dimensions of peacebuilding in cyberspace beyond a liberal approach, which is often limited to the establishment of liberal institutions – democracy, human rights, open economy, and the rule of law (Zaum, Reference Zaum2012). Here, we adopt a comprehensive approach that includes all activities focused on preventing the causes of violent conflict and strengthening mechanisms to handle conflict in a constructive and nonviolent way (Parlevliet, Reference Parlevliet2017).

Cyber peacebuilding represents a contribution to global cyber peace from a polycentric approach. Beyond an exclusively top-down perspective on the necessity of global agreements and norms for building a peaceful and stable cyberspace, we adopt a polycentric approach in order to address how local threats to peacebuilding efforts undermine the existence of cyber peace at a global level (for a similar perspective, see Chapter 2). From this perspective, the proliferation of internal armed conflicts requires the construction of peaceful cyber contexts in conflict-torn and postconflict societies.

To further explore the prospects of cyber peacebuilding, we focus on two cases: South Africa and Colombia. With the victory of the African National Congress in the 1994 election, South Africa started a process of transition from the apartheid era, which notably entailed a new constitution and the establishment of a Truth and Reconciliation Commission in 1996. Despite important achievements, the reconciliation process is still ongoing (du Toit, Reference du Toit2017). Colombia, for its part, has taken a number of major steps toward the termination of a five-decade-long internal conflict, one of the most important being the 2016 peace accord between the government and the Revolutionary Armed Forces of Colombia (FARC) guerrilla organization. While the two countries are situated at different points on the conflict/postconflict continuum, they both face the challenges of peacebuilding and reconciliation (Rodríguez-Gómez et al., Reference Rodríguez-Gómez, Foulds and Sayed2016). Moreover, they are both middle-income and relatively highly digitized countries in the Global South (Choucri and Clark, Reference Choucri and Clark2018, p. 163). In both cases, governmental stakeholders have also ignored the relevance of cyberspace for the development of peacebuilding actions. Thus, they represent two diverse and interesting cases in which to explore the prospects of cyber peacebuilding.

3 The Four Pillars of Cyber Peacebuilding

The broad definition of cyber peacebuilding outlined in the previous section encompasses many issues and actors. The four pillars of cyber peace (Shackelford, Reference Shackelford2020) provide a framework to structure the analysis. Local threats to cyber peace and cyber peacebuilding efforts can be categorized within the pillars of cyber peace: access and cybersecurity, human rights, multistakeholder governance, and stability (see Figure 5.1).

Figure 5.1 The contributions of the four pillars of cyber peace to cyber peacebuilding.

(source: elaborated by the authors [September 21, 2020])
3.1 Human Rights, a Call to Action to Update the Social Contract

The promotion of human rights and peacebuilding mechanisms can be analyzed as joint processes in which peacebuilding insights and methods can advance human rights promotion and protection (Parlevliet, Reference Parlevliet2017). However, some overlapping tensions must be considered, such as the complicated relationship between freedom of expression and political stability, and the disputes concerning how to handle sensitive issues such as hate speech, sexual harassment, and politically driven attacks that foment collective violent responses.

While freedom of expression, privacy, and data protection are covered by International Humanitarian Law and Human Rights Law at the international level (Franklin, Reference Franklin, Wagner, Kettemann and Vieth2019; Lubin, Reference Lubin, Kolb, Gaggioli and Kilibarda2020), inadequate enforcement mechanisms and profound social issues at the national level, such as a lack of digital literacy and limited Internet access, undermine their implementation (Shackelford, Reference Shackelford2019). This difficulty in adopting international regulations complicates peacebuilding scenarios because governments regulate freedom of expression to impose an official truth, which sometimes limits the opposition’s rights of expression and association. Moreover, in peacebuilding scenarios, some voices, even official ones, can become radicalized, creating new challenges to stability. In this context, governments can be tempted to prioritize security and stability over freedom of expression and a pluralist dialogue toward peacebuilding.

For example, there is no detailed legal framework in Colombia that guarantees its citizens’ fundamental rights in cyberspace. Nevertheless, freedom of speech is viewed comprehensively by the Constitutional Court. It is also backed by Colombia’s membership in the Inter-American Human Rights System, which means that this right applies not only offline but also in the online world (Dejusticia, Fundación Karisma and Privacy International, Reference Dejusticia2017). However, the respect of those human rights in cyberspace is often challenged due to the use of a securitization narrative by the current governing party, which opposed the peace negotiations with the FARC rebels. The government perceives peacebuilding as a mere process of disarmament, demobilization, and reintegration of former combatants in order to restore stability. This limited view of the peacebuilding process also justifies the use of online state surveillance to guarantee national security, in which political leaders, former government officials, journalists, and human rights activists are targeted because of their support of the peace agreement (Vyas, Reference Vyas2020). Without a doubt, the respect of human rights in Colombia, through cyberspace interactions, represents a new challenge that has been ignored by policymakers in the reconstruction of the social fabric in this transitional society.

Second, there is a tension in cyberspace over how to handle sensitive issues that could evolve into violent conflicts. In this complex scenario, Big Tech companies play a critical role because they are able to track and censor what people post and share. However, in the Global South, this tension is not a priority (Schia, Reference Schia2018). On the contrary, Big Tech companies are more concerned with access and digitalization than with privacy rights. Many social media companies that operate in developing countries do not have clear policies regarding this issue. Instead, their roles in these societies have been linked to increasing disinformation, inciting violence, and decreasing trust in the media and democratic institutions (Bradshaw & Howard, Reference Bradshaw and Howard2019).

The case of South Africa provides an interesting perspective on the respect of human rights in cyberspace as part of a reconciliation process. Its constitution guarantees the right to freedom of opinion and expression. This topic is mainly addressed under the supervision of the South African Human Rights Commission (SAHRC), created by the South African Constitution and the Human Rights Commission Act of 1994. Its mandate includes promoting human rights through education and community awareness raising; making recommendations to Parliament; reviewing legislation; and, most importantly, investigating alleged violations of fundamental rights and assisting those affected to secure redress (Sarkin, Reference Sarkin1998). Based on this mandate, the institution has provided significant recommendations on legislation concerning data protection (SAHRC, 2012) and, more recently, cybersecurity (SAHRC, 2017). Nevertheless, its main challenge is to address hate speech and racism in cyberspace, particularly on social media platforms, in a quick and efficient way. The commission acknowledges the issue and has taken some steps to meet this challenge, recognizing allegations of racism perpetrated on social media (SAHRC, 2016). Most importantly, it has started a multistakeholder dialogue to reach a detailed social media charter, including human rights education at all academic levels, to fight racism in the digital sphere (SAHRC, 2019).

In conclusion, in order to address human rights issues in cyberspace, particularly in peacebuilding scenarios, there is a need for a new social contract that recognizes human rights as digital rights. Human rights are considered a crucial element of peacebuilding, which must include cyberspace activities. To have an impact on the development of peacebuilding mechanisms, certain human rights standards, values, and principles must be included. To accomplish that end, governments ought to address public policies regarding security and privacy without exceeding their powers. Big Tech companies must provide stricter and more straightforward privacy protocols and codes of conduct, written in layperson’s terms and based on the local framework in which they operate. Moreover, civil society’s role, particularly that of users, must be present to delimit the scope of potential legal actions concerning privacy rights, freedom of speech, misinformation, and disinformation. This inclusive approach would help to create a healthy environment for the exchange of ideas and information, enabling all members who coexist in a changing society to respect and resolve their differences, even in the context of intrastate armed conflict and peacebuilding scenarios.

3.2 Multistakeholder Cyber Peacebuilding

Multistakeholder governance has become a gold standard in Internet governance and regulations of human activities in cyberspace (Scholte, Reference Scholte2020). While not exempt from criticisms in terms of legitimacy and efficiency, the cooperation between public and private actors has become necessary to handle increasingly large amounts of data and regulate private algorithms and infrastructure, leading to a hybridization of governance (Chenou & Radu, Reference Chenou and Radu2019).

This hybridization of governance has also transformed the approach to cybersecurity. Cybersecurity, understood as a national security issue, has historically curtailed the space for multistakeholder governance (Dunn Cavelty, Reference Dunn Cavelty2013; Kuehn, Reference Kuehn, Radu, Chenou and Weber2014). However, recent developments in the production and governance of cybersecurity showcase different governance structures beyond the hierarchical state-led governance of cybersecurity (Kuerbis & Badiei, Reference Kuerbis and Badiei2017; Mueller, Reference Mueller2017; Shires, Reference Shires2018; Tanczer et al., Reference Tanczer, Brass and Carr2018). Multistakeholder governance of cybersecurity is emerging at the global, national, and local levels (Pernice, Reference Pernice2018). According to Pernice, the shared responsibility for establishing cybersecurity and cyber peace requires a:

[…] multilevel and multi-stakeholder system of cybersecurity governance, a system that includes all stakeholders: the individual citizen and civil society, business enterprises, and public authorities, from the local up to the global level.

(Pernice, Reference Pernice2018, p. 122)

The participation of different sectors in cybersecurity governance is even more important in postconflict contexts, where peacebuilding efforts also require the inclusion of multiple stakeholders (Brzoska et al., Reference Brzoska, Ehrhart and Narten2011; Narten, Reference Narten, Brzoska, Ehrhart and Narten2011). Beyond public authorities, three types of actors are of particular importance. First, the private sector plays an essential role in peacebuilding efforts, both during the negotiations and in the implementation of peace agreements (Rettberg, Reference Rettberg2007, Reference Rettberg2016; Miklian & Schouten, Reference Miklian and Schouten2019). Second, the media can promote peace and the prevention of incitement to violence (Howard, Reference Howard2002; Himelfarb & Chabalowski, Reference Himelfarb and Chabalowski2008). Finally, civil society fulfils different functions in peacebuilding, such as: the protection of citizens; the monitoring of human rights violations and the implementation of peace agreements; advocacy for peace and human rights; socialization to values of peace and democracy; intergroup social cohesion; facilitation of dialogue; and service delivery to create entry points for the other functions (Paffenholz, Reference Paffenholz2010).

Despite some common requirements and goals, multistakeholder cybersecurity governance and multistakeholder peacebuilding are rarely treated together in practice. For example, South Africa has been one of the pioneering countries and a model of multistakeholder peacebuilding with the establishment of an infrastructure for peace. The 1991 National Peace Accord created Regional and Local Peace Committees that were open to any relevant civil society organization, such as religious organizations, trade unions, business and industry representatives, and traditional authorities (Odendaal, Reference Odendaal2010). This multistakeholder infrastructure for peace became a reference for further processes (Preventive Action Working Group, 2015). In 1994, South Africa created the National Economic Development and Labour Council in order to allow for multistakeholder participation in the formulation of economic and social policies. However, multistakeholder participation in the governance of cyberspace is limited in South Africa (Mlonzi, Reference Mlonzi2017). For example, the National Cybersecurity Policy Framework was drafted under the leadership of the South African Department of Communications between 2009 and 2012, but was later transferred to the Ministry of State Security (Global Partners Digital, 2013). As the responsibility of a civilian ministry, cybersecurity initially fell under the category of economic and social policy and was thus open to multistakeholder participation. However, the subsequent leadership of the Ministry of State Security limited the scope of cybersecurity and undermined the participation of diverse stakeholders.

In Colombia, multistakeholder participation became institutionalized in economic and social policies through the Consejo Nacional de Política Económica y Social (National Council of Economic and Social Policy). There is a strong participation of diverse stakeholders in the formulation of Internet governance policies organized around the Mesa Colombiana de Gobernanza de Internet (Colombian Internet Governance Forum). Moreover, the recent peace accord acknowledges that “participation and dialogue between different sectors of society contribute to building trust and promoting a culture of tolerance, respect and coexistence” (República de Colombia, 2016, Introducción, translated by the authors). However, the issue of peacebuilding is hardly included in Internet governance debates that tend to reproduce global discussions. On the other hand, the governance of cyberspace is not among the priorities of peacebuilding efforts beyond the question of access (see the section below).

Multistakeholder cyber peacebuilding represents a step further in the implementation of multistakeholder participation. It requires a multistakeholder dialogue between actors involved in the regulation of cyberspace and the diverse sectors that share a responsibility in peacebuilding activities. The cases of South Africa and Colombia illustrate the necessary participation of social media platforms and search engines in peacebuilding efforts. As the corporate social responsibility of digital platforms in campaigns and elections is being discussed in consolidated democracies, the role of digital platforms in postconflict societies to promote peace and limit incitement to violence must be put on the agenda. Likewise, the mass media’s responsibility in the promotion of a culture of peace is now shared with new media and social media (Stauffacher et al., Reference Stauffacher, Weekes, Gasser, Maclay and Best2011; Comninos, Reference Comninos2013). As noted by Majcin (Reference Majcin2018), modern peace agreements should include the regulation of social media content that may disrupt the peace and promote the resurgence of violence. These rules could even be institutionalized in the form of special commissions to review content on social media and take action when viral publications undermine peacebuilding.

In sum, multistakeholder governance of cyber peacebuilding entails not only the adoption of national cybersecurity policies that allow for the participation and representation of all stakeholders in postconflict societies; it also requires the adoption of multistakeholder mechanisms directly aimed at the promotion of peace and the prevention of violence in cyberspace, with the participation of the private sector, digital platforms, academia, and civil society organizations.

3.3 Redefining Stability in Cyberspace

To understand the role of stability in cyberspace, we adopt a nuanced definition of stabilization by drawing upon conflict resolution literature to explain how the tensions generated in cyberspace can affect the dynamics and the conclusion of intrastate conflicts and the development of peacebuilding activities.

There are many approaches to the concept of stability in addressing armed conflicts. They include issues related to statebuilding (Hoddie & Hartzell, Reference Hoddie and Hartzell2005), international interventions (Belloni & Moro, Reference Belloni and Moro2019), and negotiated peace settlements (Hartzell et al., Reference Hartzell, Hoddie and Rothchild2001), among others. In the UN Security Council’s vision, stability refers to a desired state of affairs, almost as a synonym of “peace” (Kerttunen & Tikk, Reference Kerttunen, Tikk, Tikk and Kerttunen2020). Additionally, the concept has a robustly state-centric orientation (Carter, Reference Carter2013). To analyze cyberspace’s effect on the ending of intrastate conflicts and on peacebuilding scenarios, the dynamic definition proposed by Mielke, Mutschler, and Meininghaus (Reference Mielke, Mutschler and Meininghaus2020) is more useful. They argue that stability is an open-ended and transformative process that accepts changes in social dynamics, keeping its forces in equilibrium through the constant reconcilement of interests. In a nutshell, the state’s role is crucial to setting normative rules, but nonstate actors also play a critical role in achieving long-term stability.

Considering that cyberspace is a very dynamic place, stabilization efforts can support the transition from intrastate conflict toward the restoration of the social fabric through peacebuilding actions. This nuanced approach to stability is crucial for understanding issues in conflict resolution scenarios, such as the role of spoilers in cyberspace.

Spoilers can be understood as “key individuals and parties to the armed conflict who use violence or other means to shape or destroy the peace process and in doing so jeopardize the peace efforts” (Nilsson & Söderberg, 2011, p. 624; see also Stedman, Reference Stedman1997). This definition serves to understand the impact of those actors in cyberspace who affect the termination of intrastate conflicts. Digital spoilers are political actors with relevant influence upon users in cyberspace who exploit that influence to promote violence and spoiling behavior, undermining attempts to achieve peace. They differ from Internet trolls, defined as “unknown online users that create and claim intentionally upsetting statements to enhance strong emotional responses posting offensive or unkind things on the Internet using tactics of disinformation and propaganda” (Petykó, Reference Petykó and Warf2018). Digital spoilers are conflicting parties or leaders who use trolling activities, such as the promotion of disinformation and propaganda, to undermine the achievement of conflict resolution scenarios.

One example of digital spoilers can be found in Colombia, where the opponents of the peace agreement promoted strong and negatively charged hashtags on social media concerning the endorsement of the peace process with the FARC guerilla organization in October 2016 (Nigam et al., Reference Nigam, Dambanemuya, Joshi and Chawla2017). The promotion of these messages, among other factors, affected the perception of the peace negotiations, which was reflected in the rejection of the peace plebiscite by a small margin. The management of spoilers is a daunting task because influential social media users can foment emotions and hostile attitudes against the peacebuilding process. However, these digital spoilers can be tackled when they violate the internal regulations of social media platforms (BBC News Mundo, 2019), which highlights the relevance of multistakeholder Internet governance at the national level.

Another relevant example can be found in South Africa, where political figures use the rhetoric of hate speech toward different communities in order to gain political support (Akhalbey, Reference Akhalbey2019; Meyer, Reference Meyer2019). The SAHRC has, in the past, analyzed and sanctioned some cases concerning the use of social media to promote hate speech (Geldenhuys and Kelly-Louw, Reference Geldenhuys and Kelly-Louw2020). However, it seems that its mandate does not cover those digital spoilers who express their thoughts in an offensive and disturbing way, pushing the limits of the right to freedom of speech. Their social media statements address critical issues that the peacebuilding process did not solve, such as land reform or race relations, and suggest non-peaceful actions to resolve them. Additionally, to address the damage that these digital spoilers could do in cyberspace, social media platforms have a key role to play in tackling hurtful messages. In this particular case, it seems that there is a disconnect between the conception of the legal right to freedom of expression upheld by the SAHRC and the rules established by social media platforms (Nkanjeni, Reference Nkanjeni2019), which represents a new institutional challenge to address.

In sum, within the framework of cyberspace, stability must be analyzed dynamically. The handling of information plays a critical role because it reflects an age-old tension in the relationship between citizens and governments. In that sense, Big Tech companies have become both referees and players in a complicated situation. On the one hand, they need to guarantee information and data protection to ensure their legitimacy. On the other hand, they must respect governmental authority, whose interests are linked to surveillance, data gathering, and intelligence based on controlled information. Amid intrastate armed conflicts and peacebuilding scenarios, the scope of government surveillance can expand, intensifying asymmetric responses. At the same time, politically motivated efforts to spoil conflict resolution scenarios pose a more immediate threat than the misuse of information in cyberspace beyond the cybersecurity framework. Against this background, the concept of digital spoilers is useful for analyzing the behavior of actors whose role could substantially affect the dynamics of stability and conflict resolution efforts. This dynamic approach to stability can provide fertile ground for developing cyber peacebuilding actions.

3.4 Inclusion and Human-Centered Cybersecurity

Universal Internet access is an enabling condition for cyber peace. It was identified as the first of the five principles for cyber peace by the ITU (International Telecommunication Union, 2011). According to the ITU, providing access to telecommunication technologies is part of the responsibilities of states, which was later translated into the (debated) idea of Internet access as a human right (Tully, Reference Tully2014). However, the relationship between Internet access and cyber peacebuilding is not direct. Access to the Internet is a necessary, though insufficient, condition for building a peace that spans offline and online spaces.

Contrary to the late twentieth century’s techno-optimistic visions, the “old” concept of the digital divide remains relevant today (van Dijk, Reference van Dijk2020). While early accounts of the digital divide focused on physical access and the divide among countries, contemporary analysis emphasizes the quality of access and the gaps in Internet access within the same country. This dimension is of utmost importance for cyber peacebuilding (Wilson & Wilson, Reference Wilson and Wilson2009). Communities without Internet access are generally communities that have been historically marginalized (Tewathia et al., Reference Tewathia, Kamath and Ilavarasan2020). The digital divide also has a gender dimension that undermines women’s participation in peacebuilding (Njeru, Reference Njeru2009). Moreover, since telecommunication infrastructures are targets and battlegrounds during conflicts, violence-affected regions are likely to suffer from inadequate or unstable connectivity (Onuoha, Reference Onuoha2013; Adeleke, Reference Adeleke2020). Furthermore, the national digital divide undermines states’ capacities and presence in peripheral territories and, subsequently, their legitimacy (Krampe, Reference Krampe2016). This lack of presence, combined with complicated access to increasingly digitized public services, reinforces marginalized communities’ sense of abandonment by the state.

Both South Africa and Colombia have reached significant rates of access at the national level as a result of economic development and ambitious policies. While just over 50 percent of the world population had access to the Internet at the end of 2019 (International Telecommunication Union, 2020), access rates in both countries were around 65 percent (DANE, 2020; STATSSA, 2020). However, national digital divides remain important in both countries. For example, over 74 percent of the population of Gauteng province, around Johannesburg and Pretoria, has Internet access, compared to just over 46 percent in the poorer Limpopo province, which also has the smallest white South African population in the country (Media Monitoring et al., 2019, p. 12). In Colombia, less than 10 percent of the inhabitants of 700 out of the 1,123 municipalities have Internet access (Quintero & Solano, Reference Quintero and Solano2020). These municipalities are located in geographically remote areas that are also the most affected by the internal conflict.

Bridging the digital divide is primarily a matter of telecommunication infrastructure. Another key element, however, is the use of Internet access by individuals and grassroots organizations to participate in the peacebuilding process through early warnings, grassroots reporting and monitoring, and data collection “from below.” Internet access is necessary to engage in political activities, including peacebuilding (Puig Larrauri & Kahl, Reference Puig Larrauri and Kahl2013; Shandler et al., Reference Shandler, Gross and Canetti2019).

While access is a necessary condition for civil society participation, it is not sufficient to secure meaningful participation. Another crucial condition for cyber peacebuilding is the construction of a cyberspace that is safe for everyone. A broad and emancipatory definition of cybersecurity goes beyond the preservation and defense of critical national infrastructure. It focuses on the general population, both users and nonusers, to build a postconflict cyberspace that is safe for everyone, including former fighters, victims, women, and marginalized communities. However, cybersecurity policies tend to be framed as a response to conflict. For example, research shows that cybersecurity capacity is greater in countries engaged in civil war, but this capacity seems to be aimed at cracking down on domestic dissent rather than providing a secure cyberspace at the national level (Calderaro & Craig, Reference Calderaro and Craig2020). Even in postconflict contexts, the original state-centered and militarized approach tends to prevail, despite the evolving conditions. As we have seen, the South African National Cybersecurity Policy Framework was first drafted by the Department of Communications, later transferred to the Ministry of State Security, and finally adopted in 2015 (State Security Agency, 2015). While it briefly mentions “hate speech” and “fundamental rights of South African citizens” (State Security Agency, 2015, pp. 5, 14), the bulk of the document focuses on national security and the fight against cybercrime. In the same vein, Colombia adopted a Digital Security policy in 2016 that was drafted during the negotiations between the government and the FARC guerrilla organization (CONPES, 2016). However, the document does not mention the postconflict context. It is largely inspired by the OECD discussions on the management of digital risks and thus focuses on the necessary conditions for the development of trust in Colombian digital markets. For its part, the peace accord mentions ICTs only as a way to access public information and public services such as health and education, without acknowledging their role in the peacebuilding process (República de Colombia, 2016).

Contrary to these examples, the institutionalization of cyber peacebuilding should rely on more comprehensive cybersecurity policies that, rather than reproducing the patterns of great cyber powers, focus on peacebuilding needs in postconflict societies, such as digital literacy and the regulation of hate speech.

4 Conclusions and Policy Implications

South Africa shows us that reconciliation is possible, even in cyberspace. After the violent attacks in Johannesburg mentioned in the chapter introduction, citizens promoted hashtags and social media campaigns, such as #SayNoToXenophobia, to call for unity and for an end to the violence in this mature peacebuilding scenario (Levitt, Reference Levitt2019). This example also shows that while cyberspace has undoubtedly affected the dimensions, approaches, and complex dynamics of intrastate conflicts, it can also promote peacebuilding activities that enhance conflict resolution scenarios.

Colombia provides some examples of how transitional justice contributes to cyber peace in terms of Internet access and human rights. Victims and governmental agencies jointly construct the idea of restorative justice through the use of ICTs and digital tools (Chenou, Chaparro-Martínez, & Mora Rubio, Reference Chenou, Chaparro-Martínez and Mora Rubio2019). Moreover, this relationship is tested in times of crisis; during the COVID-19 pandemic, for example, digital tools allowed transitional justice to continue (Alfredo Acosta & Zia, Reference Alfredo Acosta and Zia2020). Under certain conditions, the adoption of ICTs by transitional justice tribunals can enhance the efficiency and efficacy of the distribution of justice, allowing both parties to save time by reducing mobilization costs and keeping formalities to a minimum. In terms of truth and reconciliation, evidence can be found in the creation of an online news portal that seeks to contribute to the reconstruction, preservation, and dissemination of the historical and judicial truth about the Colombian conflict, adopting a bottom-up, in-depth journalism perspective (Verdad Abierta, 2020).

South Africa also provides several examples of cyber peacebuilding. In terms of peaceful social mobilization using ICTs, the use of mobile phones has improved organizational efficiency and access to information and has strengthened the collective identity of social movements, for example, among members of the Western Cape Anti-Eviction Campaign in 2001 (Chiumbu, Reference Chiumbu2012). Moreover, in 2015, South African university students protested around the #FeesMustFall hashtag to demand relevant changes in their education system, such as the decolonization of curricula and a significant increase in government funding for universities (Cini, Reference Cini2019). Most importantly, with the hashtag #RhodesMustFall, young South Africans showed how social media can be used to collectively question normative memory production and turn the page on the apartheid era (Bosch, Reference Bosch2017). Despite the criticisms that can be leveled at the SAHRC for its inconsistent sanctioning of hate speech by political leaders, its contribution to legislative initiatives on data protection and cybersecurity is remarkable (SAHRC, 2012, 2017).

In sum, several contributions to the development of peacebuilding activities are fostered by the linkage between conflict resolution activities in cyberspace and in the physical world. This chapter proposed a working definition of cyber peacebuilding in order to provide a broad perspective that reflects changes in the way cyberspace is perceived during intrastate armed conflicts and afterwards. ICTs are not only tools; they also constitute and enable the interactions that comprise the lifeblood of cyberspace, transforming the political dynamics of conflict and peacebuilding. Hence, this approach responds to the necessity of implementing peacebuilding efforts both in physical space and in cyberspace. The construction of a stable and lasting peace after intrastate conflicts requires delegitimizing online violence, building society’s capacity for peaceful online communication, and reducing vulnerability to digital spoilers. The structural causes of conflict must also be addressed by eliminating online discrimination and by promoting inclusion and peaceful communication in cyberspace.

The focus on peacebuilding scenarios points to one of the major sources of instability, both online and offline, for many countries in the world. While cybersecurity studies tend to focus on state actors with important capacities, a human-centered perspective on cybersecurity and cyber peace must address the digital dimension of intrastate conflicts, as is discussed further in the essays section by the Cyberpeace Institute.

Most intrastate conflicts take place in the Global South. As the majority of Internet users are now located in the Global South, the combination of ICTs and intrastate conflicts is undermining the efforts toward global cyber peace. However, cyberthreats in the Global South are less visible than in the Global North. The focus on commercial threats and on powerful countries obscures the prevalence of cyberthreats against civil society and in the Global South (Maschmeyer et al., Reference Maschmeyer, Deibert and Lindsay2020). We argue that the concept of cyber peacebuilding sheds light on the relationship between intrastate conflict and global cyber peace and thus contributes to raising awareness about cyberthreats in the Global South.

The four pillars of cyber peace provide a framework to outline comprehensive cyber peacebuilding efforts. As illustrated by Figure 5.1, they highlight the importance of existing human rights and the necessity to create new norms for the digital age. The pillar of multistakeholder governance sheds light on the role of the private sector, and especially of digital platforms and Big Tech companies, along with civil society, to complement and monitor efforts by states and intergovernmental organizations. Stability in postconflict cyberspace can be pursued through the promotion and preservation of a free flow of information and through the identification and management of digital spoilers that undermine the establishment of peace. Finally, the pillar of access and cybersecurity is particularly important in conflict-prone societies where exclusion and marginalization fuel violence. Moreover, cybersecurity must be understood as more than the implementation by the state of a public policy aimed at the protection of national infrastructure and the management of digital risk. A human-centered approach is necessary in order to build a cyberspace that is safe for everyone.

This preliminary overview of the different dimensions of cyber peacebuilding in the Colombian and South African cases paves the way for further research on the centrality of cyberspace in the termination of contemporary intrastate conflicts and in the construction of a stable and lasting peace at a global level. Moreover, it identifies avenues for political action. States and international organizations must design new human rights norms for the digital age along with comprehensive and human-centered cybersecurity policies. Capacity building can empower civil society, foster the safe use of technology, and promote peaceful communication and a culture of peace in cyberspace. Finally, the necessary role of digital platforms must be addressed in order to achieve meaningful participation and a partnership with states and intergovernmental organizations to tackle online violence.

6 Artificial Intelligence in Cyber Peace

Tabrez Y. Ebrahim
1 Introduction

This chapter examines artificial intelligence (AI, i.e., mathematical models for representing computer problems and algorithms for finding solutions to those problems) and its impact on an arms race (i.e., a dynamic in which each nation, focused on self-interest, seeks an incremental gain over others in the technological superiority of weapons) (Craig & Valeriano, Reference Craig and Valeriano2016, p. 142). In the absence of cooperation, all nations are worse off than they would be if they cooperated in some form. This chapter gives an overview of how AI’s unique technological characteristics – including speed, scale, automation, and anonymity – could propel an arms race toward cyber singularity (i.e., a hypothetical point at which AI achieves Artificial General Intelligence (AGI), surpassing human intelligence and becoming uncontrollable and irreversible) (Newman, Reference Newman2019, p. 8; Priyadarshini & Cotton, Reference Priyadarshini and Cotton2020). AI’s technological advancements have generated a good deal of attention about the AI arms race and its potential for producing revolutionary military applications. While the AI arms race has raised implications for cyber peace, a less studied issue is the potential impact of AGI development in cybersecurity, or cyber singularity. While cyber singularity is subject to hype and remains some way off, its results are generally viewed as negative at best and as destabilizing or even catastrophic for cyber peace at worst.

Notwithstanding such limitations, there is still huge potential for the use of technological advancements in AI in civilian, consumer-focused applications, and for inevitable advancements in nations’ military and security technologies. Economic competition has already motivated the development and implementation of AI by the private sector, contributing to an imbalance of economic dominance in favor of industrialized countries. Innovative companies and countries that focus on AI development may begin to monopolize AI knowledge and take the lead toward cyber singularity, which could thwart cyber peace. AI has also become an essential component of cybersecurity, as a tool used by attackers and defenders alike (Roff, Reference Roff2017). In the future, the more advanced form of AGI, or super technological intelligence, could develop its own understanding of the world and react to it rapidly and uncontrollably, without human involvement. Advancement toward cyber singularity could present new military capabilities, such as the manipulation of data and the overcoming of other nations’ defenses, and transform interactions in cyber conflict. While it is difficult to detect or measure the origination or proliferation of AI in cybersecurity, whatever cooperation among nations can be promoted is certainly worth exploring. Thus, this chapter explores how shared governance through talent mobilization, in the form of a global AI service corps, could offset the negative impact of nation-states’ economic competition to develop AGI.

2 Background and Characterization of AI

The definition of AI varies with context and is a moving target as technology continues to advance (Lemley & Case, Reference Lemley and Case2020, p. 1). The term AI refers to computer programs that perform mathematically oriented tasks generally assumed to require human intelligence (Lefkowitz, Reference Lefkowitz2019). AI can take a variety of forms, including logical inference (a form of deduction) and statistical inference (a form of induction or prediction) (Eldred, Reference Eldred2019). Such mathematical techniques are becoming more powerful because of the availability and use of large datasets, easy access to powerful and inexpensive computing resources, and the ability to run new algorithms and solve complex problems using massively parallel computing resources (Firth-Butterfield & Chae, Reference Firth-Butterfield and Chae2018, p. 5; Daly, Reference Daly2019). Another way to look at the current state of AI is that it has become cheaper and easier to apply its techniques with more speed, scale, and automation than ever before. Moreover, the collection and analysis of data can be done anonymously, which presents opportunities for the exploitation of consumers in business and of nations in cyber conflict.

Technological advancements have always played a crucial role in the context of conflict and peace (Roff, Reference Roff2016, p. 15). The introduction of information technology presented opportunities to create, move, and process data in ways never seen before, leaving nations with the power to control, defend, secure, and weaponize data. AI performs these tasks better, faster, and with more anonymity than humans, and outperforms ordinary computers and networked systems.

The information technology sophistication of AI allows for disguised and stealthy measures, provides for more effective and contextualized threats, and has the potential to amplify human cognitive capabilities in the form of cyber singularity over time (Priyadarshini & Cotton, Reference Priyadarshini and Cotton2020). Many characteristics of information technology – including the involvement of multiple actors, attribution challenges, and proliferation across borders – present unprecedented challenges for AI in cyber peace (Geers, Reference Geers2011, p. 94). Modern information technology warfare and protection measures present unique considerations for AI compared with prior means and methods. In this vein, AI-based information technologies related to cyber peace fall into three primary classifications: (1) information attacks; (2) information anonymity; and (3) information attribution (Reuter, Reference Reuter2020, pp. 16, 24–5, 113–14, 117, 279–81). A new classification of manipulation or change by AI, which is becoming increasingly ubiquitous, presents new opportunities for the integration of multiple stakeholders’ input.

With AI, nations can analyze patterns and learn from them to conduct cyberattacks (i.e., offensive capabilities of AI) and can also use these patterns to prevent cyberattacks (i.e., defensive capabilities of AI) through mechanisms more advanced than current capabilities. State-of-the-art AI already allows for the discovery of hidden patterns in data and for automating and scaling mathematical techniques to make predictions (Coglianese & Lehr, Reference Coglianese and Lehr2018, pp. 14–15).
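The kind of pattern discovery described above can be made concrete with a deliberately simple sketch: a statistical detector that flags anomalous spikes in network traffic. This is an illustrative toy, not a description of any deployed system; the traffic figures, the per-minute connection count feature, and the z-score threshold are all assumptions invented for the example.

```python
from statistics import mean, stdev

def detect_anomalies(samples, threshold=2.5):
    """Flag samples whose z-score exceeds the threshold.

    `samples` is a list of per-minute connection counts (hypothetical
    telemetry); returns the indices of suspected attack windows.
    """
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:  # perfectly uniform traffic: nothing to flag
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Hypothetical baseline traffic with one burst resembling a DDoS spike.
traffic = [120, 115, 130, 125, 118, 122, 950, 119, 121, 117]
print(detect_anomalies(traffic))  # → [6]
```

Real defensive systems learn far richer features and adapt over time, but the principle is the same: model the normal pattern, then flag deviations from it.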

The path toward AGI is especially attractive insofar as it will not seem to require human intervention and will control the information infrastructure in cyber conflicts (Burton & Soare, Reference Burton and Soare2019, pp. 5–6). As tools, techniques, and software become increasingly intelligent, AI will play a greater role in cyber conflict and cyber peace. To assess this path toward AGI and its implications for shared governance, an overview of information security technology and AI’s role in information security is necessary as a preliminary matter.

2.1 Information Security Overview

The stakes in our national information security debate are high. Information security refers to the hybrid scientific and legal inquiry into defending against all possible third-party attackers, and into the legal consequences that arise when they cannot be defended against. The purpose of information security is to develop and provide technological solutions that prevent cyberattacks and minimize the interstate insecurity caused by information technologies (Libicki, Reference Libicki2009, pp. 12–13). Information security technologies have a crucial impact on AI’s role in cyber peace; it is therefore necessary to understand what these concepts mean and how they may accelerate or decelerate the path toward a sustainable and secure cyber peace.

Information security is a capricious concept with varying definitions in the legal and policy realms, but it has a more concrete meaning in computer science and technology (Reuter, Reference Reuter2020, pp. 17–18). In a technological sense, the cyber world of computerized networks has three layers: (1) a physical infrastructure layer (including integrated circuits, processors, storage devices, and optical fibers); (2) a software logic layer (including computer programs and stored information that is subject to processing); and (3) a data layer, in which a machine contains and creates information (Tabansky, Reference Tabansky2011, p. 77). In order to analyze the relevance of information technology, particularly AI, and its role in cyber peace, it is necessary to understand how these concepts relate to the technology’s characteristics. While conflicts among nations can be carried out in different domains, such as land, sea, air, and space, conflict involving information technology infrastructure has the following peculiar characteristics with security implications: (1) many actors can be involved; (2) the identity of the security threat may be unknown due to the challenge of attribution; (3) international proliferation; and (4) a dual-use nature that can be exploited in a variety of ways (Reuter, Reference Reuter2020, pp. 12–13). These characteristics figure in the various defensive and offensive uses of information technology, as shown subsequently.

2.2 Defensive Information Security Measures

Defensive protection measures allow for proactive ways to detect and obtain information regarding cyberattacks or intrusions (Chesney, Reference Chesney2020, p. 3). Defending against cyberattackers entails the use of software tools that obfuscate or obscure cyberattackers’ efforts (Andress & Winterfeld, Reference Andress and Winterfeld2011, p. 113). A major goal of defensive cyber protection is to prevent critical infrastructure damage, which would generate large spillover effects in the wider economy. The defensive approach seeks to (i) minimize unauthorized access to, disruption of, manipulation of, and damage to computers and (ii) mitigate the harm when such malicious activity occurs. In so doing, information security seeks to preserve the confidentiality, integrity, and availability of information (Tabansky, Reference Tabansky2011, p. 81).

Approaches fall into two general categories: proactive measures (also known as preventative techniques, which can block efforts to reach a vulnerable system via firewalls, access controls, and cryptographic protection) and deterrence measures (which increase the effort needed by an adversary and include many types of security controls) (Ledner et al., Reference Ledner, Werner, Martini, Czosseck and Geers2009, pp. 6–7, 9–10). In either approach, the goal is to prevent unauthorized access to a computer system by using technological methods to identify an unauthorized intrusion, locate the source of the problem, assess the damage, prevent the spread of the damage, and reconstruct damaged data and computers (Reuter, Reference Reuter2020, pp. 22, 280–283). Deterrence, mitigation, and preventative measures using information technology include application security, attack detection and prevention, authorization and access control, authentication and identification, logging, data backup, network security, and secure mobile gateways.
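To make one such preventive and deterrence control concrete, the following sketch implements a minimal login throttle that locks an account after repeated failures within a time window, raising the effort a brute-force attacker must expend. The limits (five attempts per sixty seconds) and the interface are illustrative assumptions, not drawn from any standard or product.

```python
import time

class LoginThrottle:
    """Block an account after too many failed attempts in a window.

    A minimal illustration of a preventive/deterrence access control.
    The limits (5 attempts per 60 seconds) are arbitrary example values.
    """
    def __init__(self, max_attempts=5, window_seconds=60):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.failures = {}  # username -> list of failure timestamps

    def allowed(self, user, now=None):
        """Return True if the user may attempt a login right now."""
        now = time.time() if now is None else now
        # Keep only failures that are still inside the sliding window.
        recent = [t for t in self.failures.get(user, []) if now - t < self.window]
        self.failures[user] = recent
        return len(recent) < self.max_attempts

    def record_failure(self, user, now=None):
        """Record one failed login attempt for the user."""
        now = time.time() if now is None else now
        self.failures.setdefault(user, []).append(now)
```

After five failures inside the window, `allowed` returns `False` until the window slides past the earlier failures, illustrating how such controls raise an adversary’s cost without blocking legitimate users permanently.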

2.3 Offensive Information Security Measures

While defensive measures and technology can deter and mitigate the consequences of unauthorized access to computers and networks, limiting unauthorized access alone may not achieve cyber policy goals. Offensive measures, which are considered lawful but unauthorized, refer to penetrating or interfering with another system and can include mechanisms that allow for the impersonation of trusted users and for faster attacks with more effective consequences (Dixon & Eagan, Reference Dixon and Eagan2019). Such offensive measures are one of many ways that nations can utilize cyber power to destroy or disable an adversary’s infrastructure (Voo et al., Reference Voo, Hemani, Jones, DeSombre, Cassidy and Schwarzenbach2020). Nations seek cybersecurity in order to bend the other side’s will or to limit the scope of the other side’s efforts, and can do so via deliberate provocation or escalation through offensive measures or cyberattacks (Jensen, Reference Jensen2009, pp. 1536–1538). A common mechanism for cyberattacks is the computer network attack, in which actions taken through information technology and computer networks disrupt, deny, degrade, or destroy information on computers and networks, and can electronically render systems and infrastructures useless (Andress & Winterfeld, Reference Andress and Winterfeld2011, pp. 110–113).

The increasing power of computers, the proliferation of data, and advancements in software for AI capabilities present many new applications of offensive measures. To demonstrate that AI is a rapidly growing field with potentially significant implications for cyber peace, several technological examples are provided to show the direct or indirect impact of such advancement on the need for shared governance of a global AI service corps.

Attack means and methods include malware, ransomware, social engineering, advanced persistent threats, spam, botnets, distributed denial of service, drive-by exploits and exploit kits, identity theft, and side-channel attacks. Such cyberattacks typically involve intrusion into a digital device by some sort of malware that initiates communication between the attacking computer and the intruded device. The reasons for initiating such offensive measures include preventing authorized users from accessing a computer or information service (termed a denial-of-service attack), destroying computer-controlled machinery, or destroying or altering critical data; in doing so, they can affect artifacts connected to systems and networks (such as cyber-physical devices, including generators, radar systems, and physical control devices for airplanes, cars, and chemical manufacturing plants). Cyberattack mechanisms include the installation of malware (sometimes combined with disruptive code and logic bombs), the creation of botnets (groups of infected and controlled machines that send automated and senseless requests to a target computer), and the installation of ransomware (which encrypts a device) (Reuter, Reference Reuter2020, pp. 16, 24–5, 113–14, 117, 140, 279–81). Malware refers to malicious software, which can attack, intrude on, spy on, or manipulate computers. Botnets are made up of vast numbers of compromised computers that have been infected with malicious code and can be remotely controlled through Internet-based commands. Ransomware refers to malicious software installed on a computer, network, or service for extortion purposes; it encrypts the victim’s data or systems, making them unreadable, so that the victim has to submit a monetary payment to decrypt files or regain access.

2.4 Information Security Linkage to Artificial Intelligence

Technological development, particularly in the rapidly developing information technology realm, plays a crucial role in questions of cyber peace. Information technology is becoming omnipresent in building resilience and in managing cyber conflicts. As the interdisciplinary field of cyber peace links more closely with technology, it is crucial to consider the ways that information technology assists and supports peace processes, as well as the ways it can be a detriment.

Ever since information technology began to create, move, and process data, the security of that data has posed challenges for policy and conflict resolution. In recent years, as advancements in information technology have increased connectivity, collaboration, and intelligence, these issues have become even more important. Information technology concerns information sharing and deterrence and implicates security concerns. As such, information technology security involves the preservation of confidentiality, integrity, availability, authenticity, accountability, and reliability. Relatedly, information technology can manipulate and anonymize data, and this feature can be used for a cyberattack (Gisel & Olejnik, Reference Gisel and Olejnik2008, pp. 14–17). The implication of this capability is the challenge of attribution. Attribution refers to the allocation of a cyberattack to a certain attacker, with the aim of providing real-world evidence that unveils the attacker’s identity. AI makes it easier to identify or attribute a cyberattacker, since it analyzes a significantly higher number of attack indicators and discovers patterns (Payne, Reference Payne2018).
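One simple way to picture how analyzing larger numbers of attack indicators aids attribution is to compare an incident’s indicators (IP addresses, domains, malware hashes) against previously profiled campaigns. The sketch below scores overlap using Jaccard similarity; the campaign labels, indicators, and threshold are hypothetical example data, and real attribution relies on far richer evidence than indicator overlap alone.

```python
def jaccard(a, b):
    """Overlap of two indicator sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def attribute(incident, known_campaigns, threshold=0.3):
    """Return the best-matching known campaign, or None if none is close.

    `known_campaigns` maps a campaign label to its set of indicators.
    The threshold is an arbitrary illustrative cutoff.
    """
    best, score = None, 0.0
    for label, indicators in known_campaigns.items():
        s = jaccard(incident, indicators)
        if s > score:
            best, score = label, s
    return best if score >= threshold else None

# Hypothetical indicator profiles for two previously observed campaigns.
campaigns = {
    "campaign-A": {"203.0.113.7", "evil.example", "hash-1", "hash-2"},
    "campaign-B": {"198.51.100.9", "bad.example", "hash-9"},
}
incident = {"203.0.113.7", "hash-2", "new.example"}
print(attribute(incident, campaigns))  # → campaign-A
```

The design choice here mirrors the text: the pattern (shared infrastructure and tooling) can point to a likely actor profile even when the actor’s real-world identity remains unknown.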

AI is poised to revolutionize technological use in cyber peace by providing faster, more precise, and more disruptive capabilities (Stevens, Reference Stevens2020, pp. 1, 3, 4). AI can analyze data and trends to identify potential cyberattacks and provide offensive countermeasures to such attacks (Padrón & Ojeda-Castro, Reference Padrón and Ojeda-Castro2017, p. 4208). Moreover, AI presents the most powerful defensive capability in cybersecurity (Haney, Reference Haney2020, p. 3). While AI brings new technological capabilities to cyber conflict, it raises new questions about what it might mean for human control, or the lack thereof, and how it may mitigate or exacerbate risks (Burton & Soare, Reference Burton and Soare2019, pp. 3–4). AI capabilities can undermine data integrity and enable stealthy attacks that cause trust in organizations to falter and lead to systemic failures (Congressional Research Service, 2020, Summary). Nations could use AI to penetrate another nation’s computers or networks for the purpose of causing damage or disruption through manipulation and change (Taddeo & Floridi, Reference Taddeo and Floridi2018, pp. 1–2).

From an offensive standpoint, AI presents new considerations for cyber conflict, such as new manipulation or alteration capabilities that allow for expert compromise of computer systems with minimal detection (Burton & Soare, Reference Burton and Soare2019, pp. 9–10). Adversarial AI impacts cyber conflict in three ways: impersonation of trusted users; blending into the background by disguising and spreading itself within the digital environment; and faster attacks with more effective consequences. These capabilities provide motivation for the “defend forward” strategy of a preemptive, rather than reactive, response to cyberattacks (Kosseff, Reference Kosseff2019, p. 3).

Additionally, AI makes deterrence possible since its algorithms can identify and neutralize the source of an attack without necessarily identifying the actor behind it, which makes it easier to thwart attacks. AI capabilities allow defenders to move to the forefront of a conflict, analyzing data and trends to identify potential attacks and providing countermeasures to them.

3 Path toward AGI and Implications for Cyber Singularity

The technological development and advancement of AI present challenges and lessons for governance frameworks. Social science research has been applied toward addressing governance gaps in AI, including polycentric governance and the resulting implications for policymakers (Shackelford & Dockery, Reference Shackelford and Dockery2019, pp. 6–7; Shackelford, Reference Shackelford2014, pp. 2, 4–5).

There is no single definition of AGI, but the general consensus is that AGI refers to machines attaining intelligence greater than that of humans (Payne, Reference Payne2018). When AGI is applied to cybersecurity, it has been termed cyber singularity, which denotes superintelligence and the amplification of human cognitive capabilities in cyberspace. The path toward AGI involves advancement of AI as a technological tool in classical scenarios, as well as the application of such tools in novel situations.

The race to AGI involves the development of tools (mathematical techniques and software) used in classical cyber offense and cyber defense scenarios, but with increasing intelligence (Burton & Soare, Reference Burton and Soare2019, pp. 5–6). These represent technological attacks on computer networks, data, and infrastructure. While achieving AGI remains a futuristic concept, advancements in sensory perception and natural language understanding will help transform AI into AGI and present new offensive and defensive capabilities in cyber peace. The offensive capabilities of AGI could involve sabotaging data, masking and hiding the fact that a cyberattack is underway, and changing behaviors and contextualizing threats. The defensive capabilities of AGI could involve automatically scanning for vulnerabilities in computer networks, gathering intelligence through the scanning of computer systems, and improving existing software and scripts. In both the offensive and defensive realms, AGI could manipulate humans, or detect when humans were being manipulated and respond accordingly. Similar to an advanced form of the psychological manipulation used in behavioral advertising, AGI could conduct sophisticated manipulation of human decision-making in the midst of a cyber conflict and, in doing so, could amplify points of attack, coordinate resources, or stage attacks at scale (National Science & Technology Council, 2020, p. 7).
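The defensive capability of automatically scanning for vulnerabilities can be illustrated with a toy inventory check against a vulnerability feed; the hosts, package names, versions, and feed contents below are entirely hypothetical, and real scanners consult curated feeds such as published CVE data.

```python
# Minimal sketch of automated vulnerability scanning: compare an inventory
# of installed software against a (hypothetical) vulnerability feed and
# report affected hosts. All names and version numbers are illustrative.

# Hypothetical feed: package -> versions known to be vulnerable.
VULN_FEED = {
    "openssh": {"8.1", "8.2"},
    "nginx": {"1.16.0"},
}

def scan(inventory: dict) -> list:
    """Return (host, package, version) triples that match the feed."""
    findings = []
    for host, packages in inventory.items():
        for pkg, version in packages.items():
            if version in VULN_FEED.get(pkg, set()):
                findings.append((host, pkg, version))
    return findings

# Hypothetical fleet inventory: host -> {package: installed version}.
hosts = {
    "web-01": {"nginx": "1.16.0", "openssh": "8.4"},
    "db-01": {"openssh": "8.2"},
}
findings = scan(hosts)
```

What distinguishes the AGI scenario described above from this deterministic check is scope: an intelligent system could generate and test hypotheses about unknown flaws, rather than matching against a list of known ones.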

The race toward AGI also involves the application of such tools in novel forms pertaining to cybersecurity (Geist, Reference Geist2016; Cave & ÓhÉigeartaigh, Reference Cave and ÓhÉigeartaigh2018). In addition to technological attacks on computer networks, data, and infrastructure, AGI could be applied to psychological manipulation of society, shaping information in the political realm, the Internet, and social media with national cybersecurity implications. In the context of cybersecurity, AGI applied to the manipulation of people at societal scale includes shaping the public understanding and political action that drive national cybersecurity policy. In contrast to the scenario of AGI as a technological tool, AGI as a socio-political manipulator can enable automated mass deception or mass data collection that implicates national cybersecurity and global perspectives. While not as direct an impact as a technological attack on computer networks, data, and infrastructure, this form of AGI provides manipulative messaging and interference in media, politics, and the public sphere, akin to the profiling and data analysis methods implemented in the Cambridge Analytica scandal.

In addition to the advancement of AI toward AGI for use as a technological tool, and its application to shaping the socio-political information realm, AGI advancement in the form of cyber singularity would necessitate a transformation of approaches to warfare (Ivey, Reference Ivey2020, p. 110; O’Hanlon, Reference O’Hanlon2018). Cyber singularity, or the hypothetical point at which AGI becomes uncontrollable and irreversible in the cybersecurity realm, implicates international initiatives and policies (Priyadarshini & Cotton, Reference Priyadarshini and Cotton2020). The literal interpretation of cyber singularity concerns weapons advancement pursued through an offset strategy, that is, achieving technological superiority for its deterrent effect. Similar to past offset strategies built on nuclear weapons and on information surveillance and stealth weapons, AGI for cyber singularity represents the next offset strategy. The strategic development and use of modern algorithms, data, and information on computer networks along the path toward AGI is critical in the AI arms race. In this sense, the world is at a decisive stage in the strategic use of data and control of information on computer networks. As nations seek AGI capabilities in the AI arms race, policies that promote its development take on great importance. A shared governance approach in some form should consider ways to offset the negative impact of nation-states’ economic competition to develop AGI.

4 Shared Governance of a Global Service AI Corps

The idea behind the path toward AGI and the implications of cyber singularity is that it might be possible to create a computational machine that vastly outperforms humans in the cognitive areas of cybersecurity. Whereas current state-of-the-art AI applies to limited cybersecurity domains, AGI could learn and expand into additional cyber domains. The potential for AGI is speculative, and the idea of cyber singularity is fuzzy since it is unclear what technologies are necessary for its realization. Thus, with an unclear understanding of the likelihood and function of cyber singularity, the technological development pathway raises a host of questions. By contrast, nations could foreseeably control governance strategies in relation to AGI. One potential option, which this chapter prescribes, is directing talent and human resources toward cooperation.

Nations that direct human capital resources in this way would be able to exert control over human behavior in the arms race toward AGI and its implications for cyber singularity. Currently, there is a “brain drain” of AI talent, which is largely employed by the private sector (Andress & Winterfeld, Reference Andress and Winterfeld2011, p. 248; Congressional Research Service, 2009, p. 22). A commission that recruits, develops, and retains AI talent, such as in the form of a reserve corps, could help to level the playing field in the AI arms race and transform governance away from state-centric approaches to AI. The facilitation of early global coordination among multiple stakeholders with common interests, and the sharing of best practices, could prevent global catastrophic cybersecurity risks (Newman, Reference Newman2019, p. 4). Such a multistakeholder policy toward AI development represents a system flexible enough to adapt to new challenges and realities in a global system and to advance cyber peace, potentially even forming the backbone of a Cyber Peace Corps (Shackelford, 2017). Given that AI technological development toward AGI has been under the purview of nations, the solution to the problem of an AI arms race toward cyber singularity needs to be rooted in multilateral networks.

The AI arms race has largely been framed in terms of its economic impact rather than shared governance structures. As a result, industrialized countries with strong software industries have continued to develop AI tools that have skewed the AI arms race. As AI and data implicate economic wealth and political influence, cyber peace policy conversations will need to consider the role and advancement of AI. The greatest threat to, and the greatest opportunity for, cyber peace could be AI technology itself, rather than other forces within the nations themselves.

Footnotes

3 Information Sharing as a Critical Best Practice for the Sustainability of Cyber Peace

1 The 2016 NIST Guide to Cyber Threat Information Sharing has noted the advantages of IS measures as a means of leveraging the collective knowledge, experience, and capabilities of both state and nonstate actors within the sharing community in order to enhance the capability of each to make informed decisions regarding development of policies, defensive capabilities, threat detection techniques, and mitigation strategies.

2 On information sharing as an enabler of trust building to resolve collective action problems see, for example, Ostrom et al. (Reference Ostrom, Chang, Pennington and Tarko1990) (“By voluntarily sharing the costs of providing information – a public good – participants learned that it was possible to accomplish some joint objectives by voluntary, cooperative action.”); and Ostrom et al. (Reference Ostrom, Chang, Pennington and Tarko2012), pp. 23, 79, 81–82, 88, and 93 (where IS constitutes an element of the Socio-Ecological System, or SES concept used by Elinor Ostrom to analyze ecosystems addressing a collective action problem).

3 See below for critique of polycentric governance models in the cybersecurity context in particular; cf. McGinnis (Reference McGinnis2016).

4 Such cross-sector cooperation for cybersecurity is becoming increasingly transparent. See, for instance, U.S. Department of Justice (September 16, 2020), and the diversity of participants in the EU’s Cyber and Information Domain Coordination Center (https://pesco.europa.eu/project/cyber-and-information-domain-coordination-center-cidcc/).

5 Of course, hostile cyber actors also engage in IS, an interesting issue beyond the present scope. See Hausken (Reference Hausken2015).

6 “Cybersecurity” describes the process of applying a “range of actions for the prevention, mitigation, investigation and handling of cyber threats and incidents, and for the reduction of their effects and of the damage caused by them prior, during and after their occurrence.” Israeli Government (2015, February 15).

7 There are also open-source sharing communities that make threat indicators publicly available, such as Citizen Lab Reports, (n.d.) and analyst reports that are openly shared online. Such public platforms are definitionally distinct from IS, which relies upon the existence of a closed, trusted community for its effectiveness.

8 FS-ISAC headquarters are located in the USA, with offices in the UK and Singapore.

9 “STIX is a language … for the specification, capture, characterization and communication of standardized cyber threat information. It does so in a structured fashion to support more effective cyber threat management processes and application of automation.” Barnum (Reference Barnum2014). See also Van Impe (Reference Van Impe2015, March 26).

10 Additional standards are MITRE’s Malware Attribute Enumeration and Characterization (MAEC) and OpenIOC, developed by Mandiant (Mavroedis & Bromander, Reference Mavroedis and Bromander2017).

11 The relevant NIS Annex, entitled “Requirements and Tasks of CSIRTs,” stipulates their monitoring of risks and incidents; the provision of alerts and other operative indicators; and support for incident response.

12 The well-known example of the 2017 breach into the Equifax credit reporting company illustrates the pitfalls that characterize the reluctance of some financial sector actors to engage effectively with IS. See Warren (Reference Warren2018). See also Fournoy & Sulmeyer (September/October, 2018).

13 One leading example can be seen in the USA, where the financial sector is defined as one of the sixteen included under the aegis of DHS and also subject to the directives of the US Department of Treasury and anti-money laundering reporting requirements.

14 Barring, of course, attacks that protected systems have been directed to ignore, such as pentesting and friendly intrusions. These are not always transparent to IS participants.

15 Defined as “exchange between stakeholders of information about strategies, policies, legislation, best practices, and cyber infrastructure capacity building.” Forty-three out of the eighty-four included this measure.

16 Twenty-three out of the eighty-seven included this measure, and eighteen out of eighty-four included real-time 24/7 exchange of threat data.

17 These are: information sharing in general; sharing of information around cyber threats; law enforcement cooperation; protection of critical infrastructure; mechanisms for cooperation with the private sector and civil society; arrangements for international cooperation; a mechanism for vulnerability disclosure; regular dialogue; the mandating of general legislative measures; training of cyber personnel; cyber education programs; and conducting tabletop exercises.

18 Polycentricity is “a system of governance in which authorities from overlapping jurisdictions (or centers of authority) interact to determine the conditions under which these authorities, as well as the citizens subject to these jurisdictional units, are authorized to act as well as the constraints put upon their activities for public purposes.” McGinnis (Reference McGinnis2011), pp. 171–72. See also Black (Reference Black2008), p. 139 (“‘Polycentric regulation’ is a term which acts … to draw attention to the multiple sites in which regulation occurs at sub-national, national and transnational levels.”)

19 Specifically, key parameters include the explicit inclusion of a multiplicity and diversity of trusted participants, and a range of regulatory incentives, tools and measures employed for IS. These might encompass, inter alia, national laws, sectoral self-regulation, best practices, guidelines, standards, international agreements, public–private partnerships, academic and consulting reports, and other types of regulation through information sharing. On the other hand, some drawbacks to the polycentric approach include fragmentation, “gridlock,” inconsistency, and “the difficult task of getting diverse stakeholders to work well together across sectors and borders.”

21 A key challenge in this context is the evolution of full, mutual IS, and not only unilateral reporting of risks on the part of individuals to their banks, social media platforms, and consumer platforms.

22 On aspects of due diligence in the context of international cyber law, see Tallinn 2.0, Rules 6 and 7 at 30–50, and Rule 6 at 288.

4 De-escalation Pathways and Disruptive Technology Cyber Operations as Off-Ramps to War

1 See JP 3-12 Cyber Operations: www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp3_12.pdf. Of note, at the apex of national security, decision makers also weigh political gain/loss (PGL) and technical gain/loss (TGL).

2 Blue networks are home networks, gray networks are unallied network spaces, and red networks are opposition systems.

3 Participants were encouraged to identify gender based on preference and leave it blank if they were gender fluid in most settings to create a safe, inclusive environment.

4 Participants were encouraged to fill out this option only if they felt comfortable to preserve maximum anonymity and create a safe, inclusive space.

5 For this test, the escalation measure was the coercive potential and whether any instrument selected was greater than 3 on the previously discussed Likert scale for each instrument of power.

6 For an overview of US intelligence estimates during this period, see a 1985 declassified CIA study: www.cia.gov/library/readingroom/docs/CIA-RDP86T00587R000200190004-4.pdf.

7 This analysis focuses on the context of the dyadic rivalry and does not address the role of Israel and other US security partners in the Middle East, such as Saudi Arabia.

8 For a timeline of Iranian nuclear efforts and related diplomacy, see the Arms Control Association Timeline (updated September 2020): www.armscontrol.org/factsheets/Timeline-of-Nuclear-Diplomacy-With-Iran.

9 A list of all US sanctions can be found at a US State Department resource (www.state.gov/iran-sanctions/). Sanctions were already fairly extensive in the summer of 2019 and the United States only added targeted sanctions against industries and various actors after the downing of the US Global Hawk.

5 Cyber Peace and Intrastate Armed Conflicts: Toward Cyber Peacebuilding?

6 Artificial Intelligence in Cyber Peace

References

Ablon, L., & Bogart, A. (2017). Zero days, thousands of nights: The life and times of zero-Day vulnerabilities and their exploits. RAND Corporation. www.rand.org/pubs/research_reports/RR1751.htmlGoogle Scholar
Barford, P., Dacier, M., Dietterich, T., Fredrikson, M., Giffin, J., Jajodia, S. et al. (2010). Cyber SA: Situational awareness for cyber defense. In Jajodia, S., Liu, P., Swarup, V., & Wang, C. (Eds.), Cyber situational awareness, advances in information security (pp. 3–13).Google Scholar
Barlow, J. P. (1996). Declaration of the independence of cyberspace.CrossRefGoogle Scholar
Barnum, B. (2014). Standardizing cyber threat intelligence information with the structured threat information eXpression (STIX™). http://stixproject.github.io/about/STIX_Whitepaper_v1.1.pdfGoogle Scholar
Black, J. (2008). Constructing and contesting legitimacy and accountability in polycentric regulatory regimes. Regulation & Governance, 2(2), 137–164.Google Scholar
Borghard, E., & Lonergan, S. (2018). Confidence building measures for the cyber domain. Strategic Studies Quarterly, 12(3), 10–49. www.airuniversity.af.edu/Portals/10/SSQ/documents/Volume-12_Issue-3/Borghard-Lonergan.pdf?ver=fvEYs48lWSdmgIJlcAxPkA%3d%3dGoogle Scholar
Chabrow, E. (2015, March 15). Cyberthreat information sharing privacy concerns raised. BankInfoSecurity. www.bankinfosecurity.com/privacy-risks-raised-over-cyberthreat-information-sharing-a-8970Google Scholar
Cision. (2018, June 11). FS-ISAC launches the CERES forum: World’s First Threat Information Sharing Group for Central Banks, Regulators and Supervisors. www.prnewswire.com/news-releases/fs-isac-launches-the-ceres-forum-worlds-first-threat-information-sharing-group-for-central-banks-regulators-and-supervisors-300663921.htmlGoogle Scholar
Citizen Lab Reports. (n.d.). Targeted threats. Retrieved October 24, 2020 from https://citizenlab.ca/category/research/targeted-threats/Google Scholar
Convention on Cybercrime. (2001, November 23). E.T.S. No. 185.Google Scholar
Craig, A., & Shackelford, S. (2015). Hacking the planet, the Dalai Lama, and You: Managing technical vulnerabilities in the internet through polycentric governance. Fordham Intellectual Property, Media & Entertainment Law Journal, 24(2), 381–425.Google Scholar
Cyber Security Intelligence. (2017, May 1). The cyber security threats that keep banks alert. www.cybersecurityintelligence.com/blog/the-cybersecurity-threats-that-keep-banks-alert-2392.htmlGoogle Scholar
Cybersecurity Act of 2018. (2018, May 23). www.riigiteataja.ee/en/eli/523052018003/consolideGoogle Scholar
Cybersecurity and Infrastructure Agency. (2020a, April 8). Alert (AA20-009A): Covid-19 exploited by malicious cyber actors. www.us-cert.gov/ncas/alerts/aa20-099aGoogle Scholar
Cybersecurity and Infrastructure Agency. (2020b, January 14). Alert (AA20-014A): Critical vulnerabilities in microsoft windows operating system. www.us-cert.gov/ncas/alerts/aa20-014aGoogle Scholar
Deljoo, A., van Engers, T., Koning, R., Gommans, L., & de Laat, C. (2018). Towards trustworthy information sharing by creating cyber security alliances. 2018 17th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/12th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE), 1506–1510.Google Scholar
Directive 2016/1148, of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the union, 2016 O.J. (L194) 1.Google Scholar
Efroni, D., & Shany, Y. (2018). A rule book on the shelf? Tallinn Manual 2.0 on cyber operations and subsequent state practice. American Journal of International Law, 112(4), 583–657.Google Scholar
Elkin-Koren, N. (1998). Copyrights in cyberspace – Rights without laws. Chicago-Kent Law Review, 73(4), 1156–1201.Google Scholar
European Union Network and Information Security Agency. (2018). Cooperative models for Information Sharing and Analysis Centers (ISACs). www.enisa.europa.eu/publications/information-sharing-and-analysis-center-isacs-cooperative-modelsGoogle Scholar
Europol. (2017, December 4). Andromeda botnet dismantled in international cyber operation. [Press release]. www.europol.europa.eu/newsroom/news/andromeda-botnet-dismantled-in-international-cyber-operationGoogle Scholar
FBI. (2020, May 8). The FBI’s Internet Crime Complaint Center (IC3) marks its 20th Year [Press release]. www.fbi.gov/news/pressrel/press-releases/the-fbis-internet-crime-complaint-center-ic3-marks-its-20th-yearGoogle Scholar
Finnemore, M., & Hollis, D. (2016). Constructing norms for global cybersecurity. American Journal of International Law, 110(3), 425–479.Google Scholar
Fortinet. (2017, February 14). Cyber threat alliance expands mission through appointment of President, formal incorporation as not-for-profit and new founding members [Press release]. www.fortinet.com/ru/corporate/about-us/newsroom/press-releases/2017/cyber-threat-alliance-expands-mission.htmlGoogle Scholar
Fournoy, M., & Sulmeyer, M. (2018, September/October). Battlefield internet: A plan to secure cyberspace. Foreign Affairs. www.foreignaffairs.com/articles/world/2018-08-14/battlefield-internet.Google Scholar
Garrido-Pelaz, R., González-Manzano, L., & Pastrana, S. (2016). Shall we collaborate? A model to analyse the benefits of information sharing [Workshop presentation]. Proceedings of the 2016 ACM on Workshop on Information Sharing and Collaborative Security.Google Scholar
Gill, R., & Thompson, M. (2016). Trust and information sharing in multinational-multiagency teams. Springer.Google Scholar
Global Internet Forum to Counter Terrorism. (n.d.). Joint tech innovation. Retrieved October 24, 2020 from https://gifct.org/joint-tech-innovation/Google Scholar
Harkins, M. W. (2016). Managing risk and information security. Apress.CrossRefGoogle Scholar
Hathaway, M. (2010, May 7). Why successful partnerships are critical for promoting cybersecurity. Executive Biz.Google Scholar
Hausken, K. (2015). A strategic analysis of information sharing among cyber hackers. Journal of Information Systems and Technology Management, 12.Google Scholar
He, M., Devine, L., & Zhuang, J. (2018). Perspectives on cybersecurity information sharing among multiple stakeholders using a decision-theoretic approach. Risk Analysis, 38(2), 215–225.Google Scholar
Housen-Couriel, D. (2017). An analytical review of and comparison of operative measures included in cyber diplomatic initiatives (GCSC Issue Brief No. 1). Global Commission on the Security of Cyberspace.Google Scholar
Housen-Couriel, D. (2018). Information sharing for mitigation of hostile activity in cyberspace (Part 1). European Cybersecurity Journal, 4(3), 44–50.Google Scholar
Housen-Couriel, D. (2019). Information sharing for mitigation of hostile activity in cyberspace (Part 2). European Cybersecurity Journal, 5(1), 16–24.Google Scholar
International Standards Organization. (2015). ISO/IEC 27010:2015, Information Technology – Security Techniques – Information security management for inter-sector and inter-organizational communications. www.iso.org/standard/44375.htmlGoogle Scholar
Israel Cyber Directorate. (2019). Israel’s ‘Showcase’ for evaluation of cyber risks. www.gov.il/he/departments/general/systemfororgGoogle Scholar
Israeli Government. (2015, February 15). Resolution No. 2444, advancing the national preparedness for cyber security.Google Scholar
Johnson, C., Badger, L., Waltermire, D., Snyder, J., & Skorupka, C. (2016). Guide to cyber information threat sharing (NIST Special Pub. 800-150). National Institute of Standards & Technology. http://dx.doi.org/10.6028/NIST.SP.800-150Google Scholar
Kikuchi, M., & Okubo, T. (2020). Building cybersecurity through polycentric governance. Journal of Communications, 15, 390–397.Google Scholar
Klimburg, A. (2018). The darkening web: The war for cyberspace. Penguin Books.Google Scholar
Knerr, M. (2017). Password please: The effectiveness of New York’s first-in-nation cybersecurity regulation of banks. Business Entrepreneurship & Tax Law Review, 1(2), 539–555.Google Scholar
Lin, M. J. J., Hung, S. W., & Chen, C.J. (2009). Fostering the determinants of knowledge sharing in professional virtual communities. Computers in Human Behavior, 25(4), 929–939.Google Scholar
Liu, C. Z., Zafar, H., & Au, Y. (2014). Rethinking FS-ISAC: An IT security information sharing network model for the financial services sector. Communications of the Association for Information Systems, 34(1).Google Scholar
Lubin, A. (2019, September 21). The insurability of cyber risk [Unpublished manuscript]. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3452833Google Scholar
Macak, K. (2016). Is the international law of cyber security in crisis? In Pissanidis, N., Rõigas, H., & Veenendaal, M. (Eds.), Cyber power (pp. 127–140). NATO CCD COE Publications.Google Scholar
Macak, K. (2017). From cyber norms to cyber rules: Re-engaging states as law-makers. Leiden Journal International Law, 30(4), 877–899.Google Scholar
Mavroedis, V., & Bromander, S. (2017). Cyber threat intelligence model: An evaluation of taxonomies, sharing standards and ontologies within cyber threat intelligence. IEEE 2017 European Intelligence and Security Informatics Conference, 91–98.Google Scholar
McGinnis, M. (2011). An introduction to IAD and the language of the Ostrom Workshop: A simple guide to a complex framework. Policy Studies Journal, 39(1), 169–183.Google Scholar
McGinnis, M. (2016). Polycentric governance in theory and practice: Dimensions of aspiration and practical limitations. https://mcginnis.pages.iu.edu/polycentric%20governance%20theory%20and%20practice%20Feb%202016.pdfGoogle Scholar
Ministry of Finance and the Cyber Directorate. (2017, September 4). Memorandum from the finance cyber and continuity centre (FC3). https://docs.google.com/viewer?url=http%3A%2F%2Fwww.export.gov.il%2Ffiles%2Fcyber%2FFC3.PDF%3Fredirect%3DnoGoogle Scholar
National Cyber Security Centre. (n.d.). CiSP terms and conditions (v.5). www.ncsc.gov.uk/files/UK%20CISP%20Terms%20and%20Conditions%20v5.0.pdfGoogle Scholar
Nicholas, P. (2017, June 29). What are confidence building measures (CBMs) and how can they improve cybersecurity? Microsoft. www.microsoft.com/en-us/cybersecurity/blog-hub/CMB-and-cybersecurityGoogle Scholar
Organization for Security and Co-Operation in Europe. (2013). OSCE guide on non-military confidence-building measures (CBMs). www.osce.org/secretariat/91082Google Scholar
Organization for Security and Co-Operation in Europe. (2016, March). Decision No. 1202, confidence-building measures to reduce the risks of conflict stemming from the use of information and communication technologies. https://ccdcoe.org/incyder-articles/osce-expands-its-list-of-confidence-building-measures-for-cyberspace-common-ground-on-critical-infrastructure-protection/Google Scholar
Ostrom, E., Chang, C., Pennington, M., & Tarko, V. (1990). Governing the commons: The evolution of institutions for collective action. Cambridge University Press.CrossRefGoogle Scholar
Ostrom, E., Chang, C., Pennington, M., & Tarko, V. (2012). The future of the commons: Beyond market failure and government regulation. The Institute of Economic Affairs.Google Scholar
Oudkerk, S., & Wrona, K. (2013). Using NATO labelling to support controlled information sharing between partners. In Luiijf, E. & Hartel, P. (Eds.), Critical information infrastructures security, lecture notes in computer science (Vol. 8328). Springer Link.Google Scholar
Özalp, Ö., Zheng, Y., & Ren, Y. (2014). Trust, trustworthiness, and information sharing in supply chains bridging China and the United States. Management Science, 60(10), 2435–2460. https://doi.org/10.1287/mnsc.2014.1905Google Scholar
Paris Call for Trust and Security in Cyberspace. (2018, November 12). https://pariscall.international/en/Google Scholar
Pawlak, P. (2016). Confidence building measures in cyberspace: Current debates and trends. In Osula, A.-M. & Rõigas, H. (Eds.), International cyber norms: Legal, policy & industry perspectives (pp. 129–153). CCDCOE.Google Scholar
Powell, B. (2005). Is cybersecurity a public good? Evidence from the financial services industry. Journal of Law, Economics & Policy, 1(2), 497–510.Google Scholar
Presidential Decision Directive PDD/NSC 63. (1998, May 22). https://fas.org/irp/offdocs/pdd/pdd-63.htmGoogle Scholar
Robinson, N. (2012). Information sharing for CIP: Between policy, theory, and practice. In Laing, C., Baadi, A., & Vickers, P. (Eds.), Securing critical infrastructures and critical control systems: Approaches for threat protection. IGI Global.Google Scholar
Robinson, N., & Disley, E. (2010). Incentives and challenges for information sharing in the context of network and information security. European Union Network and Information Security Agency. www.enisa.europa.eu/publications/incentives-and-barriers-to-information-sharing
Ruhl, C., Hollis, D., Hoffman, W., & Maurer, T. (2020). Cyberspace and geopolitics: Assessing global cybersecurity norm processes at a crossroads. Carnegie Endowment for International Peace. https://carnegieendowment.org/2020/02/26/cyberspace-and-geopolitics-assessing-global-cybersecurity-norm-processes-at-crossroads-pub-81110
Schmitt, M. (Ed.). (2017). Tallinn Manual 2.0 on the international law applicable to cyber operations (2nd ed.). Cambridge University Press.
Schneier, B. (2020, January 15). Critical Windows vulnerability discovered by NSA. Schneier on Security. www.schneier.com/blog/archives/2020/01/critical_window.html
Shackelford, S. (2013). Toward cyberpeace: Managing cyberattacks through polycentric governance. American University Law Review, 62(5), 1273–1364.
Shackelford, S. (2014). Managing cyber attacks in international law, business, and relations: In search of cyber peace. Cambridge University Press.
Shackelford, S. (2016). Protecting intellectual property and privacy in the digital age: The use of national cybersecurity strategies to mitigate cyber risk. Chapman Law Review, 19(2), 445–482.
Shu-yun, Z., & Neng-hua, C. (2007). The collision and balance of information sharing and intellectual property protection. http://en.cnki.com.cn/Article_en/CJFDTOTAL-TSGL200702010.htm
Skopik, F., Settanni, G., & Fiedler, R. (2016). A problem shared is a problem halved: A survey on the dimensions of collective cyber defense through security information sharing. Computers & Security, 60, 154–176.
Sutton, D. (2015). Trusted information sharing for cyber situational awareness. E & I Elektrotechnik und Informationstechnik, 132(2), 113–116.
Thiel, A., Garrick, D., & Blomquist, W. (Eds.). (2019). Governing complexity: Analyzing and applying polycentricity. Cambridge University Press.
United Nations General Assembly. (2015, July 22). Report of the group of governmental experts on developments in the field of information and telecommunications in the context of international security (A/70/174). http://undocs.org/A/70/174
U.S. Department of Homeland Security. (2020, March 30). Bulletin SB-20-097. www.us-cert.gov/ncas/bulletins/sb20-097
U.S. Department of Justice. (2020, September 16). Seven international cyber defendants, including "APT41" actors, charged in connection with computer intrusion campaigns against more than 100 victims globally [Press release]. www.justice.gov/opa/pr/seven-international-cyber-defendants-including-apt41-actors-charged-connection-computer
Van Impe, K. (2015, March 26). How STIX, TAXII and CybOX can help with standardizing threat information. Security Intelligence. https://securityintelligence.com/how-stix-taxii-and-cybox-can-help-with-standardizing-threat-information/
Vavra, S. (2019, October 22). Why did Cyber Command back off its recent plans to call out North Korean hacking? CyberScoop. www.cyberscoop.com/cyber-command-north-korea-lazarus-group-fastcash/
Wagner, T., Mahbub, K., Palomar, E., & Abdallah, A. (2019). Cyber threat intelligence sharing: Survey and research directions. Computers & Security, 87.
Warren, E. (2018). Bad credit: Uncovering Equifax's failure to protect Americans' personal information. Office of Senator Elizabeth Warren. www.warren.senate.gov/files/documents/2018_2_7_%20Equifax_Report.pdf
Weiss, N. E. (2015, June 3). Legislation to facilitate cybersecurity information sharing: Economic analysis (No. R43821). Congressional Research Service.
Wendt, D. (2019a). Addressing both sides of the cybersecurity equation. Journal of Cyber Security and Information Systems, 7(2).
Wendt, D. (2019b). Exploring the strategies cybersecurity specialists need to improve adaptive cyber defenses within the financial sector: An exploratory study [Unpublished doctoral dissertation]. Colorado Technical University.
Wisniewski, C. (2020, January 23). Looking for silver linings in the CVE-2020-0601 crypto vulnerability. Naked Security. https://nakedsecurity.sophos.com/2020/01/23/looking-for-silver-linings-in-the-cve-2020-0601-crypto-vulnerability/
World Economic Forum. (2019). Global risks report. www.weforum.org/reports/the-global-risks-report-2019
Zheng, D., & Lewis, J. (2015). Cyber threat information sharing. Center for Strategic and International Studies. https://csis-website-prod.s3.amazonaws.com/s3fs-public/legacy_files/files/publication/150310_cyberthreatinfosharing.pdf

References

Al-Aloosy, M. (2020). The changing ideology of Hezbollah. Springer.
Axelrod, R. (1984). The evolution of cooperation. Basic Books.
Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211(4489), 1390–1396.
Axelrod, R., & Keohane, R. O. (1985). Achieving cooperation under anarchy: Strategies and institutions. World Politics, 38(1), 226–254.
Barnes, J. E. (2019a, December 27). American contractor killed in rocket attack in Iraq. New York Times. www.nytimes.com/2019/12/27/us/politics/american-rocket-attack-iraq.html
Barnes, J. E. (2019b, August 28). U.S. cyberattack hurt Iran's ability to target oil tankers, officials say. New York Times. www.nytimes.com/2019/08/28/us/politics/us-iran-cyber-attack.html
Barzashka, I. (2013). Are cyber-weapons effective? Assessing Stuxnet's impact on the Iranian enrichment programme. The RUSI Journal, 158(2), 48–56.
Beardsley, K., & Asal, V. (2009a). Nuclear weapons as shields. Conflict Management and Peace Science, 26(3), 235–255.
Beardsley, K., & Asal, V. (2009b). Winning with the bomb. Journal of Conflict Resolution, 53(2), 278–301.
Bolton, J. (2019, May 5). Statement from the National Security Advisor Ambassador John Bolton. White House. www.whitehouse.gov/briefings-statements/statement-national-security-advisor-ambassador-john-bolton-2/
Booth, K., & Wheeler, N. (2007). The security dilemma: Fear, cooperation, and trust in world politics. Springer Nature.
Borghard, E. D., & Lonergan, S. W. (2017). The logic of coercion in cyberspace. Security Studies, 26(3), 452–481.
Braithwaite, A., & Lemke, D. (2011). Unpacking escalation. Conflict Management and Peace Science, 28(2), 111–123.
Buchanan, B. (2016). The cybersecurity dilemma: Hacking, trust, and fear between nations. Oxford University Press.
Carson, A. (2020). Secret wars: Covert conflict in international politics. Princeton University Press.
Clarke, R. A., & Knake, R. K. (2014). Cyber war. Tantor Media.
Craig, A., & Valeriano, B. (2016). Conceptualising cyber arms races [Manuscript]. 8th International Conference on Cyber Conflict, Tallinn, Estonia.
Crowley, M. (2020, May 6). Trump vetoes measure demanding congressional approval for Iran conflict. New York Times. www.nytimes.com/2020/05/06/us/politics/trump-vetoes-iran-war-powers.html
Dunning, T. (2016). Transparency, replication, and cumulative learning: What experiments alone cannot achieve. Annual Review of Political Science, 19(1), 541–563.
Eckstein, H. (1975). Case studies and theory in political science. In Greenstein, F., & Polsby, N. (Eds.), Handbook of political science (Vol. 7, pp. 79–138). Addison-Wesley.
Eisenstadt, M. (2016). Iran's lengthening cyber shadow. Washington Institute for Near East Policy.
Eisenstadt, M. (2017). Iran after sanctions: Military procurement and force-structure decisions. International Institute for Strategic Studies. www.washingtoninstitute.org/uploads/Documents/opeds/Eisenstadt20171219-IISS-chapter.pdf
Fearon, J. D. (1995). Rationalist explanations for war. International Organization, 49(3), 379–414.
Fitzpatrick, M. (2017). Assessing the JCPOA. Adelphi Series, 57(466–467), 19–60.
Gartzke, E., & Lindsay, J. R. (2019). Cross-domain deterrence: Strategy in an era of complexity. Oxford University Press.
Glaser, C. L. (1997). The security dilemma revisited. World Politics, 50(1), 171–201.
Healey, J., & Grindal, K. (2013). A fierce domain: Conflict in cyberspace, 1986 to 2012. Cyber Conflict Studies Association.
Hensel, P. R., & Mitchell, S. M. (2017). From territorial claims to identity claims: The Issue Correlates of War (ICOW) Project. Conflict Management and Peace Science, 34(2), 126–140.
Herz, J. H. (1950). Idealist internationalism and the security dilemma. World Politics, 2(2), 157–180.
Hubbard, B., Karasz, P., & Reed, S. (2019, September 14). Two major Saudi oil installations hit by drone strike, and U.S. blames Iran. New York Times. www.nytimes.com/2019/09/14/world/middleeast/saudi-arabia-refineries-drone-attack.html
Huh, Y. E., Vosgerau, J., & Morewedge, C. K. (2016). More similar but less satisfying: Comparing preferences for and the efficacy of within- and cross-category substitutes for food. Psychological Science, 27(6), 894–903.
Huth, P. K. (1999). Deterrence and international conflict: Empirical findings and theoretical debates. Annual Review of Political Science, 2(1), 25–48.
Hyde, S. D. (2015). Experiments in international relations: Lab, survey, and field. Annual Review of Political Science, 18(1), 403–424.
Jensen, B. (2017). The cyber character of political warfare. The Brown Journal of World Affairs, 24(1), 159.
Jensen, B., & Valeriano, B. (2019a, March 27). Cyber escalation dynamics: Results from war game experiments. International Studies Association Annual Meeting, Panel: War Gaming and Simulations in International Conflict.
Jensen, B., & Valeriano, B. (2019b). What do we know about cyber escalation? Observations from simulations and surveys. Atlantic Council. www.atlanticcouncil.org/wp-content/uploads/2019/11/What_do_we_know_about_cyber_escalation_.pdf
Jensen, B., & Work, J. D. (2018, September 4). Cyber civil-military relations: Balancing interests on the digital frontier. War on the Rocks. https://warontherocks.com/2018/09/cyber-civil-military-relations-balancing-interests-on-the-digital-frontier/
Jervis, R. (1978). Cooperation under the security dilemma. World Politics, 30(2), 167–214.
Jervis, R. (2017). Perception and misperception in international politics. Princeton University Press.
Johnston, T., Lane, M., Casey, A., Williams, H. J., Rhoades, A. L., Sladden, J., Vest, N., Reimer, J. R., & Haberman, R. (2020). Could the Houthis be the next Hizballah? Iranian proxy development in Yemen and the future of the Houthi movement. RAND Corporation.
Kaplan, F. (2016). Dark territory: The secret history of cyber war. Simon & Schuster.
Kello, L. (2017). The virtual weapon and international order. Yale University Press.
Kinzer, S. (2008). All the Shah's men: An American coup and the roots of Middle East terror. John Wiley & Sons.
Kirkpatrick, D. D., Perez-Pena, R., & Reed, S. (2019, June 13). Tankers are attacked in the Mideast, and U.S. says video shows Iran was involved. New York Times. www.nytimes.com/2019/06/13/world/middleeast/oil-tanker-attack-gulf-oman.html
Kostyuk, N., & Zhukov, Y. M. (2019). Invisible digital front: Can cyber attacks shape battlefield events? Journal of Conflict Resolution, 63(2), 317–347.
Kreps, S., & Schneider, J. (2019). Escalation firebreaks in the cyber, conventional, and nuclear domains: Moving beyond effects-based logics. Journal of Cybersecurity, 5(1), 1–11. https://doi.org/10.1093/cybsec/tyz007
Krugman, P., & Wells, R. (2008). Microeconomics. Macmillan.
Krugman, P. R., Wells, R., & Olney, M. L. (2008). Fundamentals of economics. Worth Publishers.
Kube, C. (2019, June 6). U.S. commander says American forces face "imminent" threat from Iran. NBC News. www.nbcnews.com/news/military/u-s-commander-says-american-forces-face-imminent-threat-iran-n1014556
Lerner, K. L. (2020). The American assassination of Iranian Gen. Qassem Soleimani: Strategic implications, asymmetrical threat risks, and US congressional reporting requirements. Taking Bearings.
Levy, J. S. (2008). Case studies: Types, designs, and logics of inference. Conflict Management and Peace Science, 25(1), 1–18.
Libicki, M. C. (2012). Crisis and escalation in cyberspace. RAND Corporation.
Lin-Greenberg, E., Pauly, R., & Schneider, J. (2020, August 18). Wargaming for political science research. SSRN. http://dx.doi.org/10.2139/ssrn.3676665
Lindsay, J. R. (2013). Stuxnet and the limits of cyber warfare. Security Studies, 22(3), 365–404.
Lindsay, J. R., & Gartzke, E. (2018). Coercion through cyberspace: The stability-instability paradox revisited. In Greenhill, K. M., & Krause, P. (Eds.), Coercion: The power to hurt (pp. 179–203). Oxford University Press.
Maness, R., Valeriano, B., & Jensen, B. (2019). Dyadic cyber incident and campaign dataset (Version 1.5) [Data file].
Marshall, A. (1890). The principles of economics. McMaster University Archive for the History of Economic Thought.
Martelle, M. (2018, August 13). Joint Task Force ARES and Operation GLOWING SYMPHONY: Cyber Command's internet war against ISIL. National Security Archive. https://nsarchive.gwu.edu/briefing-book/cyber-vault/2018-08-13/joint-task-force-ares-operation-glowing-symphony-cyber-commands-internet-war-against-isil
Milner, H. V., & Tingley, D. H. (2011). Who supports global economic engagement? The sources of preferences in American foreign economic policy. International Organization, 65(1), 37–68.
Most, B. A., & Starr, H. (1983). International relations theory, foreign policy substitutability, and nice laws. World Politics, 36(3), 383–406.
Most, B. A., & Starr, H. (2015). Inquiry, logic, and international politics: With a new preface by Harvey Starr. University of South Carolina Press.
Mousavian, S. H., & Toossi, S. (2017). Assessing US–Iran nuclear engagement. The Washington Quarterly, 40(3), 65–95.
Nasri, F. (1983). Iranian studies and the Iranian Revolution. World Politics, 35(4), 607–630.
Newman, L. H. (2019, June 20). The drone Iran shot down was a $220M surveillance monster. Wired. www.wired.com/story/iran-global-hawk-drone-surveillance/
BBC News. (2018, August 7). Iran sanctions: Trump warns trading partners. www.bbc.com/news/world-us-canada-45098031
Olorunnipa, T., Dawsey, J., Demirjian, K., & Lamothe, D. (2019, June 21). "I stopped it": Inside Trump's last-minute reversal on striking Iran. The Washington Post. www.washingtonpost.com/politics/i-stopped-it-inside-trumps-last-minute-reversal-on-striking-iran/2019/06/21/e016effe-9431-11e9-b570-6416efdc0803_story.html
Palmer, G., & Bhandari, A. (2000). The investigation of substitutability in foreign policy. Journal of Conflict Resolution, 44(1), 3–10.
Pauly, R. B. (2018). Would US leaders push the button? Wargames and the sources of nuclear restraint. International Security, 43(2), 151–192.
Perla, P. P. (1990). The art of wargaming: A guide for professionals and hobbyists. Naval Institute Press.
Pomerleau, M., & Eversden, A. (2019, June 24). What to make of US cyber activities in Iran. Fifth Domain. www.fifthdomain.com/dod/2019/06/25/why-trump-may-have-opted-for-a-cyberattack-in-iran/
Powell, R. (2002). Bargaining theory and international conflict. Annual Review of Political Science, 5(1), 1–30.
Pytlak, A., & Mitchell, G. E. (2016). Power, rivalry and cyber conflict: An empirical analysis. In Friis, K., & Ringsmose, J. (Eds.), Conflict in cyber space: Theoretical, strategic and legal perspectives (pp. 81–98). Routledge.
Ramazani, R. K. (1989). Iran's foreign policy: Contending orientations. Middle East Journal, 43(2), 202–217.
Reddie, A. W., Goldblum, B. L., Lakkaraju, K., Reinhardt, J., Nacht, M., & Epifanovskaya, L. (2018). Next-generation wargames. Science, 362(6421), 1362–1364.
Renshon, J. (2015). Losing face and sinking costs: Experimental evidence on the judgment of political and military leaders. International Organization, 69(3), 659–695.
Reynolds, N. (2019). Putin's not-so-secret mercenaries: Patronage, geopolitics, and the Wagner Group. Carnegie Endowment for International Peace. https://carnegieendowment.org/2019/07/08/putin-s-not-so-secret-mercenaries-patronage-geopolitics-and-wagner-group-pub-79442
Rid, T. (2020). Active measures: The secret history of disinformation and political warfare. Farrar, Straus and Giroux.
Roff, H. (2016, September 28). Weapons autonomy risk is rocketing. Foreign Policy. https://foreignpolicy.com/2016/09/28/weapons-autonomy-is-rocketing/
Rovner, J. (2019, September 16). Cyber war as an intelligence contest. War on the Rocks. https://warontherocks.com/2019/09/cyber-war-as-an-intelligence-contest/
Sample, S. G. (1997). Arms races and dispute escalation: Resolving the debate. Journal of Peace Research, 34(1), 7–22.
Schelling, T. C. (1960). The strategy of conflict. Harvard University Press.
Schelling, T. C. (1958). The strategy of conflict: Prospectus for a reorientation of game theory. Journal of Conflict Resolution, 2(3), 203–264.
Schelling, T. C. (1966). Arms and influence. Yale University Press.
Schelling, T. C. (2020). Arms and influence. Yale University Press.
Schmitt, E., & Barnes, J. E. (2019, May 13). White House reviews military plans against Iran, in echoes of Iraq war. New York Times. www.nytimes.com/2019/05/13/world/middleeast/us-military-plans-iran.html
Schneider, J. (2017). Cyber and crisis escalation: Insights from wargaming. USASOC Futures Forum. https://paxsims.files.wordpress.com/2017/01/paper-cyber-and-crisis-escalation-insights-from-wargaming-schneider.pdf
Schneider, J. (2019). The capability/vulnerability paradox and military revolutions: Implications for computing, cyber, and the onset of war. Journal of Strategic Studies, 42(6), 841–863.
Sechser, T. S., & Fuhrmann, M. (2017). Nuclear weapons and coercive diplomacy. Cambridge University Press.
Shay, S. (2017). The axis of evil: Iran, Hizballah, and the Palestinian terror. Routledge.
Sheskin, D. J. (2020). Handbook of parametric and nonparametric statistical procedures. Chapman & Hall.
Simon, S. (2018). Iran and President Trump: What is the endgame? Survival, 60(4), 7–20.
Slayton, R. (2017). What is the cyber offense-defense balance? Conceptions, causes, and assessment. International Security, 41(3), 72–109.
Sniderman, P. M. (2018). Some advances in the design of survey experiments. Annual Review of Political Science, 21(1), 259–275.
Starr, H. (2000). Substitutability in foreign policy: Theoretically central, empirically elusive. Journal of Conflict Resolution, 44(1), 128–138.
Straub, J. (2019). Mutual assured destruction in information, influence and cyber warfare: Comparing, contrasting and combining relevant scenarios. Technology in Society, 59, 101177.
Tabatabai, A. M. (2020). After Soleimani: What's next for Iran's Quds Force? CTC Sentinel, 13(1), 28–33.
Tesler, M. (2020, January 4). Attacking Iran will not help Trump win reelection. Here's why. The Washington Post. www.washingtonpost.com/politics/2020/01/04/attacking-iran-wont-help-trump-win-reelection-heres-why/
Thompson, W., & Dreyer, D. (2011). Handbook of international rivalries. CQ Press.
Toft, M. D. (2014). Territory and war. Journal of Peace Research, 51(2), 185–198.
Trevithick, J. (2019, June 20). No easy decisions for U.S. over how to react to Iran shooting down Navy drone. The Drive. www.thedrive.com/the-war-zone/28626/no-easy-decisions-for-u-s-over-how-to-react-to-iran-shooting-down-navy-drone
Trump, D. (2018, May 8). President Donald J. Trump is ending United States participation in an unacceptable Iran deal. White House. www.whitehouse.gov/briefings-statements/president-donald-j-trump-ending-united-states-participation-unacceptable-iran-deal/
Valeriano, B. (2013). Becoming rivals: The process of interstate rivalry development. Routledge.
Valeriano, B., & Jensen, B. (2019, January 15). The myth of the cyber offense: The case for cyber restraint. Cato Institute. www.cato.org/publications/policy-analysis/myth-cyber-offense-case-restraint
Valeriano, B., & Jensen, B. (2019, June 25). How cyber operations can help manage crisis escalation with Iran. The Washington Post. www.washingtonpost.com/politics/2019/06/25/how-cyber-operations-can-help-manage-crisis-escalation-with-iran/
Valeriano, B., Jensen, B. M., & Maness, R. C. (2018). Cyber strategy: The evolving character of power and coercion. Oxford University Press.
Valeriano, B., & Maness, R. C. (2014). The dynamics of cyber conflict between rival antagonists, 2001–11. Journal of Peace Research, 51(3), 347–360.
Valeriano, B., & Maness, R. C. (2015, May 13). The coming cyberpeace: The normative argument against cyberwarfare. Foreign Affairs. www.foreignaffairs.com/articles/2015-05-13/coming-cyberpeace
Valeriano, B., & Maness, R. C. (2015). Cyber war versus cyber realities: Cyber conflict in the international system. Oxford University Press.
Van Creveld, M. (2013). Wargames: From gladiators to gigabytes. Cambridge University Press.
Vavra, S. (2019, July 10). Why Cyber Command's latest warning is a win for the government's information sharing efforts. CyberScoop. www.cyberscoop.com/cyber-command-information-sharing-virustotal-iran-russia/
Vasquez, J. A. (1993). The war puzzle. Cambridge University Press.
Vasquez, J. A., & Henehan, M. T. (2010). Territory, war, and peace. Routledge.
Wise, H. (2013). Inside the danger zone: The US military in the Persian Gulf, 1987–1988. Naval Institute Press.
Wong, E., & Schmitt, E. (2019, April 8). Trump designates Iran's Revolutionary Guards a foreign terrorist group. New York Times. www.nytimes.com/2019/04/08/world/middleeast/trump-iran-revolutionary-guard-corps.html
Yee, V. (2019, May 13). Claim of attacks on 4 oil vessels raises tensions in the Middle East. New York Times. www.nytimes.com/2019/05/13/world/middleeast/saudi-arabia-oil-tanker-sabotage.html
Zaveri, M. (2020, February 10). More than 100 troops have brain injuries from Iran missile strike, Pentagon says. New York Times. www.nytimes.com/2020/02/10/world/middleeast/iraq-iran-brain-injuries.html
Zraick, K. (2020, January 3). What to know about the death of Iranian General Suleimani. New York Times. www.nytimes.com/2020/01/03/world/middleeast/suleimani-dead.html

References

Adeleke, R. (2020). Digital divide in Nigeria: The role of regional differentials. African Journal of Science, Technology, Innovation and Development. https://doi.org/10.1080/20421338.2020.1748335
Akhalbey, F. (2019). Julius Malema blames whites for ongoing xenophobia against African migrants in South Africa. Face2faceafrica. Retrieved from: face2faceafrica.com/article/julius-malema-blames-whites-for-ongoing-xenophobia-against-african-migrants-in-south-africa-video [Accessed December 20, 2020].
AlDajani, M. I. (2020). Internet communication technology (ICT) for reconciliation. Cham: Springer.
Alfredo Acosta, A., & Zia, M. (2020, June 12). Digital transitions in transitional justice. DeJusticia. Retrieved from: www.dejusticia.org/en/column/digital-transitions-in-transitional-justice/
Stevenson, A. (2018). Facebook admits it was used to incite violence in Myanmar. New York Times. Retrieved from: www.nytimes.com/2018/11/06/technology/myanmar-facebook.html [Accessed October 5, 2019].
Alliance for Peacebuilding. (2012, Fall). Peacebuilding 2.0: Mapping the boundaries of an expanding field. Washington, DC: United States Institute of Peace.
Almutawa, A. (2020). Designing the organisational structure of the UN cyber peacekeeping team. Journal of Conflict & Security Law, 25(1), 117–147. https://doi.org/10.1093/jcsl/krz024
Barberá, P. (2020). Social media, echo chambers, and political polarization. In Persily, N., & Tucker, J. (Eds.), Social media and democracy: The state of the field, prospects for reform (pp. 34–55). Cambridge: Cambridge University Press.
BBC News Mundo. (2019). Álvaro Uribe denuncia que su cuenta de Twitter fue "bloqueada" durante la jornada del paro nacional en Colombia [Álvaro Uribe claims his Twitter account was "blocked" during the national strike in Colombia]. BBC Mundo. Retrieved from: www.bbc.com/mundo/noticias-america-latina-50511205 [Accessed September 21, 2020].
Belloni, R., & Moro, F. N. (2019). Stability and stability operations: Definitions, drivers, approaches. Ethnopolitics, 18(5), 445–461. https://doi.org/10.1080/17449057.2019.1640503
Berman, E., Felter, J. H., & Shapiro, J. N. (2020). Small wars, big data: The information revolution in modern conflict. Princeton, NJ: Princeton University Press.
Borris, E. R. (2002). Reconciliation in post-conflict peacebuilding: Lessons learned from South Africa? In Second track/citizens' diplomacy: Concepts and techniques for conflict transformation (pp. 161–181). Lanham, MD and Oxford.
Bosch, T. (2017). Twitter activism and youth in South Africa: The case of #RhodesMustFall. Information, Communication & Society, 20(2), 221–232.
Bradshaw, S., & Howard, P. N. (2019). The global disinformation order: 2019 global inventory of organised social media manipulation. Project on Computational Propaganda, Oxford Internet Institute, University of Oxford. Retrieved from: comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf
Brzoska, M., Ehrhart, H.-G., & Narten, J. (Eds.). (2011). Multi-stakeholder security partnerships: A critical assessment with case studies from Afghanistan, DR Congo and Kosovo. Baden-Baden: Nomos.
Calderaro, A., & Craig, A. J. (2020). Transnational governance of cybersecurity: Policy challenges and global inequalities in cyber capacity building. Third World Quarterly, 41(6), 917–938.
Carter, W. (2013). War, peace and stabilisation: Critically reconceptualising stability in Southern Afghanistan. Stability: International Journal of Security & Development, 2(1), 15–35.
Cederman, L. E., & Vogt, M. (2017). Dynamics and logics of civil war. Journal of Conflict Resolution, 61(9), 1992–2016.
Chenou, J. M., Chaparro-Martínez, L. P., & Mora Rubio, A. M. (2019). Broadening conceptualizations of transitional justice through using technology: ICTs in the context of Justicia y Paz in Colombia. International Journal of Transitional Justice, 13(1), 92–104.
Chenou, J. M., & Cepeda-Másmela, C. (2019). #NiUnaMenos: Data activism from the Global South. Television & New Media, 20(4), 396–411.
Chenou, J. M., & Radu, R. (2019). The "right to be forgotten": Negotiating public and private ordering in the European Union. Business & Society, 58(1), 74–102.
Chiumbu, S. (2012). Exploring mobile phone practices in social movements in South Africa: The Western Cape Anti-Eviction Campaign. African Identities, 10(2), 193–206.
Choucri, N., & Clark, D. D. (2018). International relations in the cyber age: The co-evolution dilemma. MIT Press.
Cini, L. (2019). Disrupting the neoliberal university in South Africa: The #FeesMustFall movement in 2015. Current Sociology, 67(7), 942–959.
Collins, A. (2020). Critical human security and cyberspace: Enablement besides constraint. In Salminen, M., Zojer, G., & Hossain, K. (Eds.), Digitalisation and human security: A multi-disciplinary approach to cybersecurity in the European High North (pp. 83–109). Cham: Springer.
CONPES. (2016). Política nacional de seguridad digital [National digital security policy] (CONPES No. 3854). Bogotá, DC: Consejo Nacional de Política Económica y Social. Available at: colaboracion.dnp.gov.co/CDT/Conpes/Econ%C3%B3micos/3854.pdf
Comninos, A. (2013). The role of social media and user-generated content in post-conflict peacebuilding. Washington, DC: World Bank.
DANE. (2020). Indicadores básicos de TIC en hogares [Basic ICT indicators in households]. Bogotá, DC: Departamento Administrativo Nacional de Estadísticas. Available at: www.dane.gov.co/index.php/estadisticas-por-tema/tecnologia-e-innovacion/tecnologias-de-la-informacion-y-las-comunicaciones-tic/indicadores-basicos-de-tic-en-hogares
Dejusticia, Fundación Karisma, & Privacy International. (2017). The right to privacy in Colombia: Stakeholder report, Universal Periodic Review, 30th session – Colombia. Retrieved from: uprdoc.ohchr.org/uprweb/downloadfile.aspx?filename=5412&file=EnglishTranslation
Deibert, R. (2018). Trajectories for future cybersecurity research. In The Oxford handbook of international security. Oxford, UK: Oxford University Press.
van Dijk, J. (2020). The digital divide. Hoboken, NJ: John Wiley & Sons.
Dunn Cavelty, M. (2013). From cyber-bombs to political fallout: Threat representations with an impact in the cyber-security discourse. International Studies Review, 15(1), 105–122.
du Toit, F. (2017). A broken promise? Evaluating South Africa's reconciliation process twenty years on. International Political Science Review, 38(2), 169–184. https://doi.org/10.1177/0192512115594412
Duncombe, C. (2019). The politics of Twitter: Emotions and the power of social media. International Political Sociology, 13(4), 409–429.
Franklin, M. I. (2019). Human rights futures for the internet. In Wagner, B., Kettemann, M., & Vieth, K. (Eds.), Research handbook on human rights and digital technology (pp. 5–23). Cheltenham, UK: Edward Elgar Publishing.
Geldenhuys, J., & Kelly-Louw, M. (2020). Demistifying hate speech under the PEPUDA. Potchefstroom Electronic Law Journal, 23, 1–50.
Global Partners Digital. (2013). Internet governance: Towards greater understanding of global south perspectives (May 2013 report). London: Global Partners Digital.
Gohdes, A. R. (2018). Studying the internet and violent conflict. Conflict Management and Peace Science, 35(1), 89–106.
Hartzell, C., Hoddie, M., & Rothchild, D. (2001). Stabilizing the peace after civil war: An investigation of some key variables. International Organization, 55(1), 183–208.
Himelfarb, S., & Chabalowski, M. (2008). Media, conflict prevention and peacebuilding: Mapping the edges. Washington, DC: United States Institute of Peace.
Hochwald, T. (2013). How do social media affect intra-state conflicts other than war? Connections, 12(3), 9–38.
Hoddie, M., & Hartzell, C. (2005). Signals of reconciliation: Institution-building and the resolution of civil wars. International Studies Review, 7(1), 21–40.
Holmes, C. (2019). What's behind South Africa's xenophobic violence last week? The Washington Post. www.washingtonpost.com/politics/2019/09/09/whats-behind-south-africas-xenophobic-violence-last-week/
Howard, R. (2002). An operational framework for media and peacebuilding. Vancouver, BC: Institute for Media, Policy and Civil Society.
International Telecommunication Union. (2011, January). The quest for cyber peace. Geneva. Retrieved from: handle.itu.int/11.1002/pub/803f9a60-en
International Telecommunication Union. (2020). Statistics. Available at: www.itu.int/en/ITU-D/Statistics/Pages/stat/default.aspx
Kaplan, A. M., & Haenlein, M. (2012). Social media: Back to the roots and back to the future. Journal of Systems and Information Technology, 14(2), 101–104.
Karlsrud, J. (2014). Peacekeeping 4.0: Harnessing the potential of big data, social media, and cyber technologies. In Kremer, J. F., & Müller, B. (Eds.), Cyberspace and international relations: Theory, prospects and challenges (pp. 141–160). Springer.
Kerttunen, M., & Tikk, E. (2020). The politics of stability: Cement and change in cyber affairs. In Tikk, E., & Kerttunen, M. (Eds.), Routledge handbook of international cybersecurity. Oxford, UK: Routledge.
Krampe, F. (2016). Empowering peace: Service provision and state legitimacy in Nepal's peace-building process. Conflict, Security & Development, 16(1), 53–73. https://doi.org/10.1080/14678802.2016.1136138
Kuehn, A. (2014). Extending cybersecurity, securing private internet infrastructure: The US Einstein program and its implications for internet governance. In Radu, R., Chenou, J. M., & Weber, R. (Eds.), The evolution of global internet governance: Principles and policies in the making (pp. 157–167). Springer.
Kuerbis, B., & Badiei, F. (2017). Mapping the cybersecurity institutional landscape. Digital Policy, Regulation and Governance, 19(6), 466–492. https://doi.org/10.1108/DPRG-05-2017-0024
Laouris, Y. (2004). Information technology in the service of peacebuilding: The case of Cyprus. World Futures, 60(1–2), 67–79. https://doi.org/10.1080/725289197
Levitt, J. (2019, September 4). #SayNoToXenophobia calls for unity as looting and violence rock SA. TimesLIVE. Retrieved from: www.timeslive.co.za/news/south-africa/2019-09-04-saynotoxenophobia-calls-for-unity-as-looting-and-violence-rock-sa/
Lubin, A. (2020). The rights to privacy and data protection under international humanitarian law and human rights Law. Asaf Lubin, the rights to privacy and data protection under international humanitarian law and human rights law. In Kolb, R., Gaggioli, G., & Kilibarda, P. (Eds.), Research handbook on human rights and humanitarian law: Further reflections and perspectives. Edward Elgar(forthcoming).Google Scholar
Majcin, J. (2018). Social media challenges to peace-making and what can be done about them. Groningen Journal of International Law, 6(2), 242–255.Google Scholar
Margetts, H., John, P., Hale, S., & Yasseri, T. (2015). Political turbulence: How social media shape collective action. Princeton, NJ: Princeton University Press.Google Scholar
Maschmeyer, L., Deibert, R. J., & Lindsay, J. R. (2020). A tale of two cybers – how threat reporting by cybersecurity firms systematically underrepresents threats to civil society. Journal of Information Technology & Politics. Latest articles, 1–20. https://doi.org/10.1080/19331681.2020.1776658Google Scholar
Mason, T. D., & Mitchell, S. M. (Eds.). (2016). What do we know about civil wars? Lanham, MD: Rowman & Littlefield.Google Scholar
Mathew, B., Dutt, R., Goyal, P., & Mukherjee, A. (2019). Spread of hate speech in online social media. In Proceedings of the 10th ACM conference on web science (pp. 173–182).Google Scholar
Media Monitoring Africa, South African National Editors’ Forum, Interactive Advertising Bureau of South Africa, Association for Progressive Communications. (2019). Universal access to the internet and free public access in South Africa. A Seven-Point Implementation Plan. September, 2019. Available at: internetaccess.africa/wp-content/uploads/2019/10/UA-Report.pdfGoogle Scholar
Mielke, K., Mutschler, M., & Meininghaus, E. (2020). For a dynamic approach to stabilization. International Peacekeeping, 27(5), 810–835. https://doi.org/10.1080/13533312.2020.1733424Google Scholar
Miklian, J., & Schouten, P. (2019). Broadening ‘business’, widening ‘peace’: A new research agenda on business and peace-building. Conflict, Security & Development, 19(1), 1–13. https://doi.org/10.1080/14678802.2019.1561612Google Scholar
Mlonzi, Y. (2017). South Africa and internet governance: Are we just ticking a box? Global information society watch 2017: National and regional internet governance forum initiatives. Association for Progressive Communications.Google Scholar
Mueller, M. (2017). Is cybersecurity eating Internet governance? Causes and consequences of alternative framings. Digital Policy, Regulation and Governance, 19(6), 415–428. https://doi.org/10.1108/DPRG-05-2017-0025Google Scholar
Mullenbach, M. J. (2005). Deciding to keep peace: An analysis of international influences on the establishment of third-party peacekeeping missions. International Studies Quarterly, 49(3), 539–540.Google Scholar
Narten, J. (2011). Multi-stakeholder security partnerships: Characteristics, processes, dilemmas and impacts. In Brzoska, M., Ehrhart, H.-G., & Narten, J. (Eds.), Multi-stakeholder security partnerships (pp. 15–37). Baden-Baden: Nomos.Google Scholar
Nkanjeni, U. (2019). Twitter rules Malema’s ‘only trust a dead white man’ Mugabe tribute not violent, despite outrage. Sunday Times. Retrieved from: www.timeslive.co.za/news/south-africa/2019-09-17-twitter-rules-malemas-only-trust-a-dead-white-man-mugabe-tribute-not-violent-despite-outrage/Google Scholar
Nigam, A., Dambanemuya, H. K., Joshi, M., & Chawla, N. V. (2017). Harvesting social signals to inform peace processes implementation and monitoring. Big Data, 5(4), 337–355.Google Scholar
Nilsson, D., & Söderberg Kovacs, M. (2011). Revisiting an elusive concept: A review of the debate on spoilers in peace processes. International Studies Review, 13(4), 606–626.Google Scholar
Njeru, S. (2009). Information and communication technology (ICT), gender, and peacebuilding in Africa: A case of missed connections. Peace and Conflict Review, 3(2), 32–40.Google Scholar
Odendaal, A. (2010). An architecture for building peace at the local level: A comparative study of local peace committees. New York: UNDP.Google Scholar
Onuoha, F. (2013). Boko Haram: Anatomy of a crisis, 1–91. Bristol, UK: e-international relations press. Retrieved from: https://reliefweb.int/report/nigeria/boko-haram-anatomy-crisisGoogle Scholar
Paffenholz, T. (Ed.). (2010). Civil society & peacebuilding: A critical assessment. Boulder, CO: Lynne Rienner.Google Scholar
Parlevliet, M. (2017). Human rights and peacebuilding: Complementary and contradictory, complex and contingent. Journal of Human Rights Practice, 9(3), 333–357. https://doi.org/10.1093/jhuman/hux032Google Scholar
Pernice, I. (2018). Global cybersecurity governance: A constitutionalist analysis. Global Constitutionalism, 7(1), 112–141.Google Scholar
Pettersson, T., & Öberg, M. (2020) Organized violence, 1989–2019. Journal of Peace Research, 57(4), 597–613.Google Scholar
Petykó, M. (2018). Troll. In Warf, B. (Ed.) The SAGE encyclopedia of the internet (pp. 880–882). Thousand Oaks, CA: SAGE Publications.Google Scholar
Preventive Action Working Group. (2015). Multi-stakeholder processes for conflict prevention and peacebuilding: A manual. The Hague: Global Partnership for the Prevention of Armed Conflict.Google Scholar
Puig Larrauri, H., & Kahl, A. (2013). Technology for peacebuilding. Stability: International Journal of Security & Development, 2(3), 1–15. https://doi.org/10.5334/sta.cvGoogle Scholar
Puig, H. (2019). Social networks: Fuel to conflict and tool for transformation. Peace in progress, 36. Barcelona: ICIP. www.icipperlapau.cat/numero36/articles_centrals/article_central_7Google Scholar
Quintero, R., & Solano, Y. (2020). Estudiar en línea en Colombia es un privilegio. El Tiempo, June 30, 2020. Available at: www.eltiempo.com/datos/asi-es-la-conexion-a-internet-en-colombia-510592Google Scholar
Ramsbotham, O., Miall, H., & Woodhouse, T. (2016). Contemporary conflict resolution, 4th ed., Cambridge, UK: Polity.Google Scholar
República de Colombia. (2016). Acuerdo Final para la terminación del conflicto y la construcción de la Paz Estable y Duradera en Colombia. November 24, 2016.Google Scholar
Rettberg, A. (2007). The private sector and peace in El Salvador, Guatemala, and Colombia. Journal of Latin American Studies, 39(3), 463–494.Google Scholar
Rettberg, A. (2016). Need, creed, and greed: Understanding why business leaders focus on issues of peace. Business Horizons, 59(5), 481–492.Google Scholar
Rodríguez-Gómez, D., Foulds, K., & Sayed, Y. (2016). Representations of violence in social science textbooks: Rethinking opportunities for peacebuilding in the Colombian and South African post-conflict scenarios. Education as Change, 20(3), 76–97.Google Scholar
Robinson, M., Jones, K., Janicke, H., & Maglaras, L. (2019). Developing cyber peacekeeping: Observation, monitoring and reporting. Government Information Quarterly, 36(2), 276–293.Google Scholar
Salem, S. (2014). The 2011 Egyptian uprising. Framing events through the narratives of protesters. Revolution as a process. The case of the Egyptian uprising. Bremen, Germany: Wiener Verlag für Sozialforschung, 21–47.Google Scholar
Sarkees, M. R., & Wayman, F. W. (2010). Resort to war: A data guide to inter-state, extra-state, intra-state, and non-state wars, 1816–2007. Washington, DC: SAGE Publications.Google Scholar
Sarkin, J. (1998). The development of a human rights culture in South Africa. Human Rights Quarterly, 20(3), 628–65.Google Scholar
Schia, N. N. (2018). The cyber frontier and digital pitfalls in the Global South. Third World Quarterly, 39(5), 821–837.Google Scholar
Scholte, J. A. (2020). Multistakeholderism: Filling the global governance gap? Global challenges foundation, April 2020. Retrieved from: globalchallenges.org/wp-content/uploads/Research-review-global-multistakeholderism-scholte-2020.04.06.pdfGoogle Scholar
Shackelford, S. J. (2019). Should Cybersecurity be a human right: Exploring the shared responsibility of cyber peace. Stanford Journal of International Law, 55(1), 155.Google Scholar
Shackelford, S. J. (2020). Inside the global drive for cyber peace (April 15, 2020). Retrieved from SSRN:ssrn.com/abstract=3577161orhttps://doi.org/10.2139/ssrn.3577161Google Scholar
Shandler, R., Gross, M. L., & Canetti, D. (2019). Can you engage in political activity without internet access? The social effects of internet deprivation. Political Studies Review, https://doi.org/10.1177/1478929919877600Google Scholar
Shires, J. (2018). Enacting expertise: Ritual and risk in cybersecurity. Politics and Governance, 6(2), 31–40.Google Scholar
South African Human Rights Commission. (2012). SAHRC statement on latest developments regarding the protection of state information bill. www.sahrc.org.za/index.php/sahrc-media/news-2/item/151-sahrc-statement-on-latest-developments-regarding-the-protection-of-state-information-billGoogle Scholar
South African Human Rights Commission. (2016). Human rights advocacy and communications report 2015–2016. www.sahrc.org.za/home/21/files/29567%20A4%20adv%20report%20FINAL%20FOR%20PRINT.pdfGoogle Scholar
South African Human Rights Commission. (2017). Submission on the cybercrimes and cybersecurity bill [B6-2017]. www.sahrc.org.za/home/21/files/SAHRC%20Submission%20on%20Cybercrimes%20and%20Cybersecurity%20Bill-%20Aug%202017.pdfGoogle Scholar
South African Human Rights Commission. (2019). Stakeholder dialogue on racism and social media in South Africa. www.sahrc.org.za/home/21/files/Racism%20and%20Social%20Media%20Report.pdfGoogle Scholar
Spillane, J. (2015). ict4p: Using information and communication technology for peacebuilding in Rwanda. Journal of Peacebuilding & Development, 10(3), 97–103.Google Scholar
State Security Agency. (2015). The national cybersecurity policy framework. SA Government Gazette No. 39475. December 4, 2015.Google Scholar
STATSSA. (2020). General household survey 2018. Pretoria: Statistics South Africa. Available at: www.statssa.gov.za/publications/P0318/P03182018.pdfGoogle Scholar
Stauffacher, D., Weekes, B.,Gasser, U., Maclay, C., & Best, M. (Eds.). (2011). Peacebuilding in the information age. Shifting hype from reality. Geneva: ICT4Peace Foundation.Google Scholar
Stedman, S. J. (1997). Spoiler Problems in Peace Processes. International Security, 22(2), 5–53.Google Scholar
Tanczer, L. M., Brass, I., & Carr, M. (2018). CSIRT s and global cybersecurity: How technical experts support science diplomacy. Global Policy, 9(3), 60–66.Google Scholar
Tellidis, I., & Kappler, S. (2016). Information and communication technologies in peacebuilding: Implications, opportunities and challenges. Cooperation and Conflict, 51(1), 75–93. https://doi.org/10.1177/0010836715603752Google Scholar
Tewathia, N., Kamath, A., & Ilavarasan, P. V. (2020). Social inequalities, fundamental inequities, and recurring of the digital divide: Insights from India. Technology in Society, 61(1), 1–11.Google Scholar
Tully, S. (2014). A human right to access the Internet? Problems and prospects. Human Rights Law Review, 14(2), 175–195.Google Scholar
United Nations. (2005). Tunis commitment on the information society. WSIS-05/TUNIS/DOC/7-E. 18 November 2005. Retrieved from: www.itu.int/net/wsis/docs2/tunis/off/7.htmlGoogle Scholar
Valeriano, B., & Maness, R. C. (2018). International relations theory and cyber security. The Oxford Handbook of International Political Theory, 259.Google Scholar
Verdad Abierta. (2020, September 25). Quienes somos. Retrieved from: verdadabierta.com/quienes-somos/Google Scholar
Vyas, K. (2020). Colombian intelligence unit used U.S. equipment to spy on politicians, journalists. Retrieved from: www.wsj.com/articles/colombian-intelligence-unit-used-u-s-equipment-to-spy-on-politicians-journalists-11588635893Google Scholar
Wallensteen, P. (2018). Understanding conflict resolution. Washington, DC: SAGE Publications.Google Scholar
Walter, B. F. (2017). The new civil wars. Annual Review of Political Science, 20, 469–486.Google Scholar
Weimann, G. (2016). The emerging role of social media in the recruitment of foreign fighters. In de Guttry, A., Capone, F., & Paulussen, C. (Eds.), Foreign fighters under international law and beyond (pp. 77–95). The Hague: TMC Asser Press.Google Scholar
Wilson, J., & Wilson, H. (2009). Digital divide: Impediment to ICT and peace building in developing countries. American Communication Journal, 11(2), 1–9.Google Scholar
Young, O., & Young, E. (2016). Technology for peacebuilding in divided societies: ICTs and peacebuilding in Northern Ireland. TRANSCOM (Transformative Connections).Google Scholar
Zaum, D. (2012). Beyond the “liberal peace”. Global Governance: A Review of Multilateralism and International Organization, 18(1), 121–132.Google Scholar
Zeitzoff, T. (2017). How social media is changing conflict. Journal of Conflict Resolution, 61(9), 1970–1991.Google Scholar

References

Andress, J., & Winterfeld, S. (2011). Cyber warfare. Elsevier.Google Scholar
Burton, J., & Soare, S. (2019). Understanding the strategic implications of the weaponization of artificial intelligence [Manuscript]. 11th international conference on cyber conflict. Tallinn, Estonia. https://ccdcoe.org/uploads/2019/06/Art_14_Understanding-the-Strategic-Implications.pdfGoogle Scholar
Cave, S., & ÓhÉigeartaigh, S. (2018, December). An AI race for strategic advantage: Rhetoric and risks. AIES ‘18: Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society. New Orleans, LA, USA. https://doi.org/10.1145/3278721.3278780Google Scholar
Chesney, B. (2020, March 2). Cybersecurity law, policy, and institutions, v.3.0.Google Scholar
Congressional Research Service. (2020, August 26). Artificial intelligence and national security.Google Scholar
Congressional Research Service. (2009, March 17). Information operations, cyberwarfare, and cybersecurity: Capabilities and related policy issues.Google Scholar
Coglianese, C., & Lehr, D. (2018, November 9). Transparency and Algorithmic Governance. Administrative Law Review, 71(1), 1–56.Google Scholar
Craig, A. & Valeriano, B. (2016). Conceptualising cyber arms races [Manuscript]. 8th International conference on cyber conflict. Tallinn, Estonia. https://ccdcoe.org/uploads/2018/10/Art-10-Conceptualising-Cyber-Arms-Races.pdfGoogle Scholar
Daly, A. (2019, June 5). Artificial intelligence governance and ethics: Global perspectives. https://arxiv.org/ftp/arxiv/papers/1907/1907.03848.pdfGoogle Scholar
Dixon, W., & Eagan, N. (2019, June 19). 3 Ways AI will change the nature of cyber attacks. World Economic Forum. www.weforum.org/agenda/2019/06/ai-is-powering-a-new-generation-of-cyberattack-its-also-our-best-defence/Google Scholar
Eldred, C. (2019, October). AI and domain knowledge: Implications of the limits of statistical inference. Berkeley Roundtable on International Economics. https://brie.berkeley.edu/sites/default/files/ai_essay_final_10.15.19.pdfGoogle Scholar
Firth-Butterfield, K., & Chae, Y. (2018, April). Artificial intelligence collides with patent law. World Economic Forum. www3.weforum.org/docs/WEF_48540_WP_End_of_Innovation_Protecting_Patent_Law.pdfGoogle Scholar
Geers, K. (2011, January 1). Strategic cyber security. NATO Cooperative Cyber Defence Centre for Excellence.Google Scholar
Geist, E. M. (2016, August 15). It’s already too late to stop the AI arms race—we must manage it instead. Bulletin of the Atomic Scientists, 72(5), 318–321. https://doi.org/10.1080/00963402.2016.1216672Google Scholar
Gisel, L., & Olejnik, L. (2008, November 14–16). The potential human cost of cyber operations [Manuscript]. ICRC Expert Meeting. Geneva, Switzerland. www.icrc.org/en/document/potential-human-cost-cyber-operationsGoogle Scholar
Haney, B. S. (2020). Applied artificial intelligence in modern warfare & national security policy. Hastings Science and Technology Journal, 11(1), 61–100.Google Scholar
Ivey, M. (2020). The ethical midfield in artificial intelligence: Practical reflections for national security lawyers. The Georgetown Journal of Legal Ethics, 33(109), 109–138. www.law.georgetown.edu/legal-ethics-journal/wp-content/uploads/sites/24/2020/01/GT-GJLE190067.pdfGoogle Scholar
Jensen, E. T. (2009). Cyber warfare and precautions against the effects of attacks. Texas Law Review, 88(1533), 1534–1569.Google Scholar
Kosseff, J. (2019). The countours of ‘Defend Forward’ under international law, 2019 11th International Conference on Cyber Conflict (CyCon) 900, 1–13.Google Scholar
Ledner, F., Werner, T., & Martini, P. (2009). Proactive botnet countermeasures – An offensive approach. In Czosseck, C. & Geers, K. (Eds.), The virtual battlefield: Perspectives on cyber warfare (pp. 211–225). 10.3233/978-1-60750-060-5-211Google Scholar
Lefkowitz, M. (2019, September 25). Professor’s perceptron paved the way for AI – 60 years too soon. Cornell Chronicle. https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soonGoogle Scholar
Lemley, M. A., & Case, B. (2020). You might be a robot. Cornell Law Review, 105(287), 287–362.Google Scholar
Libicki, M. C. (2009). Cyberdeterrence and cyberwar. RAND Corporation.Google Scholar
National Science & Technology Council. (2020, March). Networking & Information Technology Research and Development Subcomittee and the Machine Learning & Artificial Intelligence Subcommittee. Artificial Intelligence and Cybersecurity: Opportunities and Challenges, Technical Workshop Summary Report.Google Scholar
Newman, J. C. (2019, February). Towards AI security: Global aspirations for a more resilient future. Center for Long-Term Cybersecurity.Google Scholar
O’Hanlon, M. E. (2018, November 29). The role of AI in future warfare. Brookings. www.brookings.edu/research/ai-and-future-warfare/Google Scholar
Padrón, J. M., & Ojeda-Castro, Á. (2017, June). Cyberwarfare: Artificial intelligence in the frontlines of combat. International Journal of Information Research and Review, 4(6), 4208–4212.Google Scholar
Payne, K. (2018). Artificial intelligence: A revolution in strategic affairs? International Institute for Strategic Studies.Google Scholar
Priyadarshini, I., & Cotton, C. (2020, May 6). Intelligence in cyberspace: The road to cyber singularity. Journal of Experimental & Theoretical Artificial Intelligence. https://doi.org/10.1080/0952813X.2020.1784296Google Scholar
Reuter, C. (2020). Information technology for peace and security. Springer.Google Scholar
Roff, H. M. (2016, March). Cyber peace, new America. Cybersecurity Initiative.Google Scholar
Roff, H. M. (2017, August 1–3). Cybersecurity, artificial intelligence, and nuclear modernization [Workshop]. Cyberwarfare and Artificial Intelligence. University of Iceland, Reykjavik, Iceland.Google Scholar
Shackelford, S. J. (2014, April 16). Governing the final frontier: A polycentric approach to managing space weaponization and debris. American Business Law Journal, 51(2), 429–513.Google Scholar
Shackelford, S. J. & Dockery, R. (2019, October 30). Governing AI. Cornell Journal of Law and Policy. Advanced online publication.Google Scholar
Stevens, T. (2020, March 31). Knowledge in the grey zone: AI and cybersecurity. Journal of Digital War. https://doi.org/10.1057/s42984-020-00007wGoogle Scholar
Tabansky, L. (2011, May). Basic concepts in cyber warfare. Military and Strategic Affairs, 3(1), 75–92.Google Scholar
Taddeo, M., & Floridi, L. (2018, April 16). Regulate artificial intelligence to avert cyber arms race. Nature, 556(7701), 296–298. https://doi.org/10.1038/d41586-018-04602-6Google Scholar
Voo, J., Hemani, I., Jones, S., DeSombre, W., Cassidy, D., & Schwarzenbach, A. (2020, September). National Cyber Power Index 2020. Belfer Center for Science and International Affairs, Cambridge, Tech. Rep. September 2020 [Online]. Available: www.belfercenter.org/sites/default/files/2020-09/NCPI_2020.pdfGoogle Scholar
Figures and Tables

Figure 3.1 Traffic Light Protocol (TLP) definitions and usage. (Source: CISA)
Figure 4.1 Diagram from wargame simulation.
Table 4.1 Treatment groups
Table 4.2 Contingency results by treatment
Figure 4.2 Response preferences from wargame simulation.
Table 4.3 Expected count of escalation events
Table 4.4 Treatment groups and instrument of power response preferences
Table 4.5 Conventional versus cyber escalation
Table 4.6 Coercive potential
Table 4.7 Coercive potential and cyber substitution
Figure 4.3 Iran–United States case timeline.
Figure 5.1 The contributions of the four pillars of cyber peace to cyber peacebuilding. (Source: elaborated by the authors)
