Published online by Cambridge University Press: 01 July 2021
The use of artificial intelligence to remove so-called terrorist content from online platforms is the object of this chapter. As a first step, the analysis identifies the complex and tangled relationship between the different levels of regulation involved. In doing so, the author notes that the United Nations’ role seems particularly unclear, and that major international initiatives aimed at managing the challenge of terrorist content online – such as the Global Internet Forum to Counter Terrorism – lack a democratic architecture and even transparency as regards their governance and activities. As a consequence, the removal of “dangerous” content is often left to voluntary, informal, and unregulated partnerships between the public and the private sector. As a second point, the analysis observes that regional organizations – first of all, the European Union – are taking steps to regulate these partnerships, but the outcome suffers from several drawbacks, in terms of both human rights protection and institutional design. Ultimately, the chapter assesses these challenges through the lens of constitutional law and argues in favor of a more pronounced role for international and regional bodies, which would avert excessive informality and privatization in such a sensitive field.