Enthusiasm abounds about the potential of artificial intelligence to automate public decision-making. The rise of machine learning and computational text analysis together with the proliferation of digital platforms has raised the prospect of “robo-judging” and “robo-administrators.” From a human rights perspective, the reaction has been mixed, and on balance negative. Optimists herald the possibilities of democratizing legal services and making decision-making more predictable and efficient. Critics warn, however, of the specter of new forms of social control, arbitrariness, and inequality. This essay examines the concerns over the turn to automation from the perspective of two international human rights: the rights to social security and a fair trial. It argues that while the critiques deserve a full hearing, they should be evidence-based, informed by an understanding of “technological systems,” and cognizant of the trade-offs between human and machine failure.
AI-based military applications present both opportunities and challenges for multinational military cooperation. This contribution takes stock of the state of discussions around AI-based military applications within the North Atlantic Treaty Organization (NATO). While there have been a number of recent developments in national AI strategies and policies, discussions at the NATO level are still in early phases, and there is no agreed NATO policy in this area. Further multilateral work is needed if like-minded states such as NATO Allies and partners are to head off the serious risk that disagreements about these technologies might hamper effective multilateral military cooperation.
Every road vehicle must have a driver able to control it while in motion. These requirements, explicit in two important conventions on road traffic, have an uncertain relationship to the automated motor vehicles that are currently under development—often colloquially called “self-driving” or “driverless.” The immediate legal and policy questions are straightforward: Are these requirements consistent with automated driving and, if not, how should the inconsistency be resolved? More subtle questions go directly to international law's role in a world that artificial intelligence is helping to rapidly change: In a showdown between a promising new technology and an entrenched treaty regime, which prevails? Should international law bend to avoid breaking? If so, what kind of flexibility is appropriate with respect to both the status and the substance of treaty obligations? And what role should deliberate ambiguity play in addressing these obligations? This essay raises these questions through the concrete case of automated driving. It introduces the road traffic conventions, identifies competing interpretations of their core driver requirements, and highlights ongoing efforts at the Global Forum for Road Traffic Safety to reach a consensus.
States are investing heavily in artificial intelligence (AI) technology, and are actively incorporating AI tools across the full spectrum of their decision-making processes. However, AI tools are currently deployed without a full understanding of their impact on individuals or society, and in the absence of effective domestic or international regulatory frameworks. Although this haste to deploy is understandable given AI's significant potential, it is unsatisfactory. The inappropriate deployment of AI technologies risks litigation, public backlash, and harm to human rights. In turn, this is likely to delay or frustrate beneficial AI deployments. This essay suggests that human rights law offers a solution. It provides an organizing framework that states should draw on to guide their decisions to deploy AI (or not), and can facilitate the clear and transparent justification of those decisions.