The making of legal subjects has long been a crucial terrain for critical theory, not least in relation to international law, where both emancipatory promises and expressions of power or discipline are tied to how subjects are recognized and enacted. International law's modes of subject-making have therefore been an important site of aspiration, struggle, and critique. While some have celebrated the rise of the individual on the stage of international law,Footnote 1 the liberal ideal of legal and political subjectivity lingering in these celebratory accounts has been confronted by different strands of feminist, post-colonial, and Marxist critique. With the proliferating use of digital technologies in practices of (global) governance, the making of legal subjects has taken novel forms. Big data technologies manufacture subjects in ways that spark new legal anxieties and destabilize or problematize established patterns of critical engagement. In the data-driven practices that we will describe, subjects are no longer exclusively enacted as abstract autonomous entities or classified along stable criteria (of difference or enmity). Sustained by tools of pattern recognition and technologies for the “unsupervised uncovering of correlations,”Footnote 2 nascent forms of global governance by data produce subjects as transient clusters of attributes and data points within transient clusters of attributes and data points—bundles of vectors within vectors, only tentatively and temporarily tied together.Footnote 3 In this essay, we map out how this mode of subject-making has become prevalent in different domains of international legal practice. We trace these dynamics to changes in the exercise of state sovereignty and the technoscopic regimes—assemblages for information flow, processing, retention, and surveillance—that states rely on.
In different spheres of global security governance, the desire to act proactively and pre-emptively entails attentiveness to emergent patterns detected in data. As a result, the subject of security interventions—the “high risk” entity, a still-emerging category developing with the technologies assembled to define it—is increasingly constituted by its placement in such emergent patterns and the correlation of characteristics reflected in them. This particular modality of subject-making, we argue, produces new dynamics of difference in international law,Footnote 4 evades existing legal safeguards, and troubles the exercise of collective agency. Further, we take up this symposium's invitation to reflect on the prospects of critiquing these novel forms of governance by data, or these new means of constructing subjects by and for governmental intervention. We observe that regulatory and normative responses in international law tend to revolve around the (re)assertion of human autonomy and rationality, the demand for democratic representation and inclusion, and the concern that algorithmic biases threaten the promise of formal legal equality. These regulatory tropes reinvigorate a liberal ideal of legal and political subjectivity that has long been the target of critical approaches in international law and beyond. We find these liberal responses to algorithmic governance inadequate. At the same time, established critical alternatives to liberal interventions are on unsure footing. They increasingly seem unable to capture and counter contemporary expressions of sovereign power, or, worse, are re-signified and appropriated to serve new rationalities of rule. Reflecting on these dilemmas, we close by pointing to possible productive directions for a critical international law and technology agenda.
Technoscopic Regimes and Algorithmic Subjects
We see changes in the techniques of subject-making across different domains of global governance and international legal practice. In the sphere of global security, technologies of data collection and algorithmic analysis are deployed to detect and preempt security threats through the identification of “high risk” profiles and patterns. In the context of counterterrorism, algorithmic border control, or anticipatory warfare, digital security technologies and practices pursue the objective of detecting future threats before they materialize. In each of these domains, action may be taken—in the form of administrative or criminal sanctions, detention at the border, travel interdictions, and lethal strikes—against individuals who are neither perpetrating, nor even planning or preparing, violent or threatening acts. In this way, we argue, these practices produce new modes of subjectivity and turn individuals into transient clusters of attributes and data points.
At the transnational level, digital technologies create a special path in the criminal justice system for “unknown terrorists.” The algorithmic trajectory of the individual, already morphing into a conglomerate of potentialities, starts at the stage of watchlisting, prior to the commission of violent acts.Footnote 5 After this algorithmic identification of dangerousness, the trajectory continues in criminal trials, where intelligence is turned into evidence. In several EU member states, judges pronounce prison sentences against individuals based on intercepted digital communication, biometric data, and predictive analytics taken to reveal (sometimes very indirect) ties with terrorist groups. The algorithmic journey continues in detention, where the evolution of dangerousness is monitored with the chief objective of organizing post-detention surveillance.
In the counterterrorism context, new data collection tools have turned warfare into a permanent surveillance apparatus.Footnote 6 Instead of selecting targets because they have been observed participating in hostilities, states active in so-called wars on terror make decisions to kill on the basis of “suspicious patterns of behavior” and aggregates of elements indicating dangerousness.Footnote 7 The logic of target selection at play here is determined by patterns, propensities, and anomalies detected in data. The idea that correlations can reveal suspicious patterns of behavior prompts a recourse to algorithmic systems that will, in turn, exacerbate the production of evanescent and datafied subjectivities by the surveillance apparatus on the battlefield.Footnote 8
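The logic of anomaly-based flagging can be made concrete with a minimal sketch. The code below is our own illustration and does not reproduce any deployed military or intelligence system; the records, features, and parameters are invented. It shows how an unsupervised model assigns each record an anomaly score without any predefined criterion of what counts as “suspicious,” and how the share of subjects flagged is itself fixed in advance by a tunable parameter.

```python
# A hypothetical illustration of anomaly-based flagging. The model has no
# concept of "dangerousness," only of statistical deviation from the bulk
# of the data. All records, features, and parameters are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic behavioral records (e.g., movement and communication features).
records = rng.normal(size=(1000, 3))

# "contamination" fixes in advance what fraction of subjects will be
# flagged as anomalous: the category is produced by the parameter.
model = IsolationForest(contamination=0.01, random_state=0).fit(records)

scores = model.decision_function(records)  # lower = more "anomalous"
flags = model.predict(records)             # -1 = flagged, 1 = not flagged
print("subjects flagged as anomalous:", int((flags == -1).sum()))
```

Nothing in the fitted model corresponds to an observed threat; the “anomaly” exists only relative to the rest of the dataset and to a threshold chosen before the data are ever seen.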
We similarly observe the introduction of big data analytics and artificial intelligence (AI) in the field of border control. While the 2025 UK Border Strategy sets out to “develop advanced new risking systems” based on “AI-driven decision making,”Footnote 9 the 2020 EU strategy on the “use of AI in border control, migration and security” identified nine areas of opportunity for AI in this domain. In keeping with the pre-emptive logic of contemporary security practices, the objective of these AI systems is not just to “verify[] and identify[] known persons,” but “to identify unknown persons of interest based on specific data-based risk profiles.”Footnote 10 In this sense, AI would enable processes of “[r]isk assessment performed on a group of individuals with the general aim to find patterns and cluster individuals for further investigation.”Footnote 11 The strategy elaborates on this orientation toward “patterns” and “clusters,” clarifying that the “classification categories” dividing people at the border “could be defined based on a risk threshold or specific indicators” or could be “less pre-defined where applications are grouped based on some ‘learned’ similarity.”Footnote 12 In the latter case, unsupervised AI models would be employed to “partition data into clusters.”Footnote 13 Tools of this kind are envisaged to be part of the European Travel Information and Authorisation System, where AI will be deployed to optimize the performance of risk profiles. This aligns with a broader strategic aim to “identify patterns which were not observed as ‘strange’ before”—a distillation of meaningful attributes and features through the “unsupervised uncovering of correlations.”Footnote 14 It is the “uncovered correlation”—the “pattern” and “cluster” of inferred attributes—that divides and disciplines people at the digital border.Footnote 15
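What such “partitioning” amounts to in practice can be illustrated with a minimal sketch. The code below is our own hypothetical illustration, not drawn from any system described in the EU strategy; the “traveler” records, features, and parameters are invented. It shows how an unsupervised model groups records by “learned” similarity, and how a subject's cluster membership is an artifact of a particular fit, liable to change as soon as the model is retrained on new data.

```python
# A minimal, hypothetical sketch of unsupervised clustering at the border.
# Nothing here reproduces an actual border-control system; the features,
# parameters, and data are invented for illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic "traveler" records: each row is a bundle of attributes
# (e.g., trips per year, average stay, prior refusals, age).
travelers = rng.normal(size=(500, 4))

# Partition the data into clusters of "learned" similarity. No category
# is pre-defined: the groups exist only as a by-product of the fit.
scaled = StandardScaler().fit_transform(travelers)
model = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaled)
print("cluster today:", model.predict(scaled[:1])[0])

# Refit after new records arrive: the same subject may now sit in a
# differently composed cluster. Membership is transient, an artifact
# of the current data and the current fit.
updated = np.vstack([travelers, rng.normal(size=(200, 4))])
scaled2 = StandardScaler().fit_transform(updated)
model2 = KMeans(n_clusters=5, n_init=10, random_state=1).fit(scaled2)
print("cluster after refit:", model2.predict(scaled2[:1])[0])
```

The point of the sketch is that the “cluster” has no existence outside the fit: its boundaries, its composition, and even its label are recomputed with every pass over the data.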
Across these different domains of global security governance, we observe how new compositions of sensory power produce new subject categories.Footnote 16 The subject we encounter here is neither an individual defined according to predetermined criteria of dangerousness or enmity, nor part of a delineated population stratified along fixed signs of difference, but a transient part of ephemeral clusters constantly subject to recombination (for economic and security purposes). Our observations about the changing nature of the subject are connected to a wider reflection on the exercise of state sovereignty in networked environments. If the modern subject is or was a product of the sovereign state, defined by control over territory and population, the new subject is a function of changes in state sovereignty and its reliance on novel technoscopic regimes. The concept of the technoscopic regime is useful for studying these changes in governmentality and the specific ways of enacting subjects through computational technologies. Etymologically, we define the concept of the technoscopic with reference to its Greek roots in τεχνο and σκοπός, the former encompassing not just technology, but also art, craft, and skill, the latter encompassing not just the watcher, but also the objective and target. The concept of the technoscopic regime is an invitation to trace the techniques and the art of watching and targeting. With the technoscopic regimes of big data collection and algorithmic association, we argue, practices of subject-making and social sorting have taken novel forms. These changing practices, which mediate the violent, punitive, and exclusionary expressions of sovereign power, demand our critical attention.
Liberal Anxieties and the Prospect of CRILT
Taking up the invitation of CRILT to reflect on the project of “critical” international law in relation to these socio-technical developments, we consider several elements that a critical program should be attentive to and that signal possible pathways for its direction. First, despite the presumed novelty of algorithmic governance, it remains crucial to account for how new technological interventions are grafted onto and amplify historical conditions of racial capitalism and the biopolitical regimes of its enactment. Today's global economy and big data technologies have been built on the violently differential treatment of bodies, as evidenced both in patterns of inequitable global economic distribution and in observed algorithmic violence against women and people of color.Footnote 17 We situate the technological tools of Fortress Europe as part of a still-colonial historical moment, in which racialized territorial security is an ongoing trope of political-economic activity.Footnote 18 In this way, the changing conditions of digital sovereignty and the subjects it enacts play out against the historically consistent backdrop of racialized economic and security routines.
Second, however, to situate digital sovereignty and contemporary subjectivities against consistent historical routines is not to deny the changes that they manifest. Critical engagement with contemporary security routines should consist of finding ways to scrutinize, make visible, and contest how the composition of “clusters” described above enacts and recomposes legal subjects through reiterative pattern formation and the operations it engenders. These operations are inherently distributive: they are aimed at rating and ranking, scoring, and sorting subjects along the speculative lines of future risk. This produces specific dynamics of difference in international law, along with new techniques for disciplining and normalizing the “aberrant.” These technologies can be understood as socio-technical processes of racialization: the “floating signifier of race”—this “master code” of difference—is re-signified with the “unsupervised uncovering of correlations” and the algorithmic inference of patterns and attributes not observed as “strange” before.Footnote 19 These practices of racialization, altered and amplified by the manufacturing of subjects as fluid correlations of attributes, hinder the prospect of collective political action. The algorithmic mode of subject-making relies on the power to define groups, and in the process denies that power to the subjects that it clusters and re-clusters according to its own contingent, intangible calculus.
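The distributive character of these operations can be sketched schematically. The code below is our own hypothetical illustration, with invented data, features, and threshold; it shows how a learned score produces a ranking of subjects, and how a cut-off, chosen outside the model, converts that ranking into a division between the “normal” and the “aberrant.”

```python
# A hypothetical sketch of rating, ranking, scoring, and sorting. The
# training data, features, and threshold are all invented; the point is
# the distributive operation, not any real risk model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic training records: attribute bundles paired with past
# "incident" labels (random here, purely for illustration).
X_train = rng.normal(size=(800, 5))
y_train = rng.integers(0, 2, size=800)
scorer = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Speculative "future risk" scores for new, unlabeled subjects.
X_new = rng.normal(size=(100, 5))
risk = scorer.predict_proba(X_new)[:, 1]

# Ranking and sorting: the ordering itself is the governing output.
ranking = np.argsort(risk)[::-1]

# A threshold, chosen outside the model, turns the continuous score
# into the category of the "aberrant."
THRESHOLD = 0.7
aberrant = np.where(risk > THRESHOLD)[0]
print("subjects flagged for intervention:", aberrant.tolist())
```

The score itself decides nothing; it is the threshold, set elsewhere, that converts a continuous ranking into a binary division between subjects.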
The technological changes and problems we are describing have not gone unnoticed. International legal scholars have developed repertoires to conceptualize and counteract new forms of data-driven governance. While these interventions have merit, we observe a pattern of liberal anxiety in regulatory responses that consistently revolve around the (re)assertion of formal legal equality, human autonomy and rationality (to preserve “the freedom to initiate and develop one's own version of the good life”),Footnote 20 or the demand for democratic representation and its “turn to . . . publicness.”Footnote 21 This leads to a reinvigoration, in short, of liberal ideals of legal subjectivity that have long been the target of critical work, salient strands of which have especially been developed in the tradition of critical Black studies. As Sylvia Wynter argues, the emergence of the sovereign and autonomous human “subject” was possible “only on the basis of the dynamics of a colonizer/colonized relation that the West was to discursively constitute and empirically institutionalize”—a dynamic that we now consider to be reconfigured and emboldened.Footnote 22 Fred Moten, in a similar vein, observes that the “public sphere”—a sphere where “putatively individual subjects act and speak in public in so-called collectivities or coalitions”—not only “exclude[s] [Black folks] from modalities of citizenship, personhood, subjecthood,” but is “predicated on that exclusion, which is to say: predicated on the regulation and exclusion of that insurgency which ‘blackness’ instantiates”—an insurgency that “manifests itself as the refusal of the regulative force that has to be exerted in order for subjects to come into their own as subjects.”Footnote 23 We therefore consider reinvigorated ideals of liberal subjectivity to be ill-suited to curtailing technoscopic regimes, especially for those historically made vulnerable. These tropes even become part of the problem when they frame the profound issue of algorithmic inequality in terms of statistical bias, data quality, or privacy concerns. To frame algorithmic governance as a privacy problem, for instance, maintains and cultivates the idea that accessible, capturable information is privileged knowledge and reveals individuals.Footnote 24
Yet, while the liberal ideal of legal subjectivity is troubling, the critical response traditionally leveled against it is also on unsure footing. One of our central claims is that the subject of global governance by data is a fluid and emergent correlation of attributes. Although unfixing the subject was long the prize sought by critical resistance to the hegemonic powers of the modern state, the prize arrived in a troubling form. While the nostalgic return to the liberal subject strikes us as unsatisfactory, the critical repertoire that took aim at that subject is also unmoored in the new ecology of digital sovereigns and cluster-subjects. In this light, past practices of contestation might be inadequate to address the contemporary developments raised here. One opening for critical work that strikes us as particularly productive is situated in the tradition of critical Black studies,Footnote 25 which has a long history of thinking against the regulatory force of liberal legal subjectivity.Footnote 26 This provides a pathway for CRILT to create counter-narratives that oppose regimes which reduce, fix, and essentialize individuals and communities, and that instead make visible and insist on the thick, dense, unreadable character of subjects and social life. In the right to opacity and the right to refuse representation,Footnote 27 we find traces of a subjectivity that “does not want to be correct or corrected,” that wants to be neither fixed nor clustered according to a contingent and constantly changing calculus—a “political subjectivity” that, in the words of Ramon Amaro and Murad Khan, “does not organize at the threshold of existing perceptions of [racial] difference, but instead releases the energy from this interaction to form a potentially new individual and collective being.”Footnote 28