The legal services market is commonly thought of as divided into two “hemispheres”—PeopleLaw, which serves individuals and small businesses, and BigLaw, which serves corporate clients. The last few decades have seen an increasing concentration of resources within the legal profession toward the latter, to the alleged detriment of the former. At the same time, the costs of accessing legal representation exceed the financial resources of many ordinary citizens and small businesses, compromising their access to the legal system. We ask: Will the adoption of new digital technologies lead to a levelling of the playing field between the PeopleLaw and BigLaw sectors? We consider this in three related dimensions. First, for users of legal services: Will technology deliver reductions in cost sufficient to enable affordable access to the legal system for consumer clients whose legal needs are currently unmet? Second, for legal services firms: Will the deployment of technology to capture economies of scale mean that firms delivering legal services across the two segments become more similar? And third, for the structure of the legal services market: Will the pursuit of economies of scale trigger consolidation that leads both segments toward a more concentrated market structure?
We set out the case for computational social science as opposed to traditional “pencil and paper” formal methods. The substantive theme of this book is the governance cycle in parliamentary democracies, but the ideas we put forward can be applied to many other areas of study.
Natural language processing techniques promise to automate an activity that lies at the core of many tasks performed by lawyers, namely the extraction and processing of information from unstructured text. The relevant methods are thought to be a key ingredient for both current and future legal tech applications. This chapter provides a non-technical overview of the current state of NLP techniques, focusing on their promise and potential pitfalls in the context of legal tech applications. It argues that, while NLP-powered legal tech can be expected to outperform humans in specific categories of tasks that play to the strengths of current machine learning (ML) techniques, there are severe obstacles to deploying these tools in other contexts, most importantly in tasks that require the equivalent of legal reasoning.
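To give a concrete flavor of the extraction task described above, here is a minimal sketch in Python using the open-source spaCy library and its general-purpose English model. The contract clause is invented for illustration, and production legal tech systems rely on domain-tuned models rather than this off-the-shelf pipeline.

```python
# Minimal sketch: extracting structured information from legal text.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # general-purpose English pipeline

# Invented example clause, standing in for unstructured legal text.
clause = (
    "This Agreement is entered into on 1 March 2023 between Acme Corp. "
    "and Jane Doe, and is governed by the laws of New York."
)

doc = nlp(clause)

# Named-entity recognition surfaces parties, dates, and jurisdictions,
# the kind of structured data lawyers otherwise extract by hand.
for ent in doc.ents:
    print(f"{ent.text!r:>20}  {ent.label_}")
# Typical output: '1 March 2023' DATE, 'Acme Corp.' ORG,
# 'Jane Doe' PERSON, 'New York' GPE.
```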
We set out an alternative, “top down”, approach to agent-based modeling. We develop an artificial intelligence (AI) algorithm to navigate the governance cycle using what we can think of as computational game theory. AI models have had formidable success in solving games like Chess, Go, and, especially, a bluffing game like Poker, suggesting they also have the potential to attack difficult political games. Addressing a simplified version of the government formation process as a noncooperative game, the AI algorithm deploys Monte Carlo Counterfactual Regret Minimization (MCCFR). In massively repeated self-play, it samples paths through the vast game tree to relentlessly learn near-optimal strategies.
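The learning rule at the heart of counterfactual regret minimization is regret matching: play each action with probability proportional to its accumulated positive regret. A minimal, self-contained Python sketch of that rule follows; it is not the book's implementation, and Rock-Paper-Scissors stands in here for the vastly larger governance-cycle game tree.

```python
# Regret matching via self-play on Rock-Paper-Scissors: the learning
# rule underlying (MC)CFR. Illustrative sketch, not the book's model.
import random

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's payoff

def strategy_from(regrets):
    """Play each action in proportion to its positive cumulative regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
strategy_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]

for _ in range(100_000):  # massively repeated self-play
    strats = [strategy_from(regrets[p]) for p in (0, 1)]
    acts = [random.choices(range(ACTIONS), weights=strats[p])[0] for p in (0, 1)]
    for p in (0, 1):
        sign = 1 if p == 0 else -1          # player 1's payoff is the negation
        got = sign * PAYOFF[acts[0]][acts[1]]
        for a in range(ACTIONS):
            # Regret: how much better action a would have done than the action played.
            alt = sign * (PAYOFF[a][acts[1]] if p == 0 else PAYOFF[acts[0]][a])
            regrets[p][a] += alt - got
            strategy_sums[p][a] += strats[p][a]

# The time-averaged strategy approaches equilibrium (about 1/3 each here).
average = [s / sum(strategy_sums[0]) for s in strategy_sums[0]]
print([round(x, 3) for x in average])
```

In MCCFR proper, the same regret bookkeeping is kept at every information set, and sampling which paths of the game tree to traverse is what makes games of realistic size tractable.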
Although proponents of online dispute resolution systems proclaim that their innovations will expand access to justice for so-called “simple cases,” evidence of how the technology actually operates and who is benefitting from it demonstrates just the opposite. Resolution of some disputes may be more expeditious and user interfaces more intuitive. But to achieve this, parties generally do not receive meaningful information about their rights and defenses. The opacity of the technology (ODR code is not public and, unlike court proceedings, its processes are private) means that due process defects and systemic biases are difficult to identify and address. Worse still, the “simple cases” argument for ODR assumes that the dollar value of a dispute is a reasonable proxy for its complexity and significance to the parties. This assumption is contradicted by well-established research on procedural justice. Moreover, recent empirical studies show that low-dollar-value cases, which dominate state court dockets, are for the most part debt collection proceedings brought by well-represented private creditors or public creditors (including courts themselves, which increasingly depend on fines and fees for their operating budgets). Defendants in these proceedings are overwhelmingly unrepresented individuals. What ODR offers in these settings is not access to justice for ordinary people, but rather a powerful accelerated collection and compliance technology for private creditors and the state. This chapter examines the design features of ODR and connects them to the ideology of tech evangelism that drives deregulation and market capture, the aspirations of the alternative dispute resolution movement, and the hostility to the adversary system that has made strange bedfellows of traditional proponents of access to justice and tech profiteers. The chapter closes with an analysis of front-end standards for courts and bar regulators to consider to ensure that technology marketed in the name of access to justice actually serves the legal needs of ordinary people.
While heavy-duty computational methods have revolutionized much empirical work in political science, computational analysis has yet to have much impact on theoretical accounts of politics – in contrast to the situation in many of the natural sciences. Here we set out to map a path forward in computational social science. Analyzing the complex and deductively intractable “governance cycle” that plays out in the high-dimensional issue spaces of parliamentary systems, we use two different computational approaches. One models functionally rational politicians who deploy rules of thumb to navigate their complex environment. The other deploys an artificial intelligence algorithm which systematically learns, from massively repeated self-play, to find near-optimal strategies. Future work made possible by greater computational firepower would enable better AI, more realistic ABMs, and the modeling of logrolling under the conditions of incomplete information which characterize most real-world bargaining and negotiation.
What effect will potent new legal tech tools have on the civil litigation landscape, and what can or should we do about it? Recent trends in plaintiff win rates and damages awards suggest the American civil justice system is growing more slanted toward the “haves” at the expense of the “have-nots.” Some say that AI-fired legal tech tools will reverse this trend and democratize the system. We disagree. Potent new legal tech tools are surely coming. Many are already here. But these tools are, and will likely continue to be, unevenly distributed because of the privileged access to data and technical know-how of emerging consortia of corporations, law firms, and tech companies. As a result, legal tech will, at least over the near- to medium-term, further skew the litigation playing field, shaping not just the resolution of claims but also the evolution of substantive law. As the American civil justice system enters the digital age, the haves will be propelled yet further ahead.
We outline the core argument of the book and the steps taken to establish it. We begin by sketching the component parts of the governance cycle: election, government formation, and government survival. Noting that the analysis of this complex system is intractable for traditional deductive methods of formal modeling, we preview two different computational methods for analyzing it. First, in an agent-based model (ABM), we model “functionally rational” artificial agents who use simple but effective rules of thumb to navigate their high-stakes but complex environment; a heuristic of this kind is sketched below. Second, we specify an artificial intelligence (AI) algorithm which, by massively repeated self-play, teaches itself to find near-optimal strategies for playing what is in effect a traditional, but intractable, noncooperative game. We conclude by sketching the empirical approach we use to first calibrate and exercise the models on training data and then test them on out-of-sample test data.
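As an illustration of the first method, here is a hypothetical rule of thumb of the kind such an ABM might contain, sketched in Python. Both the heuristic (approach the ideologically closest partners until the proto-coalition commands a majority) and all party names and numbers are invented for illustration; this is not the book's calibrated model.

```python
# Hypothetical "functionally rational" heuristic for government formation:
# a formateur greedily adds the ideologically closest parties until the
# proto-coalition holds a legislative majority. Illustrative only.
from dataclasses import dataclass

@dataclass
class Party:
    name: str
    seats: int
    position: float  # ideal point on a single left-right dimension

def propose_coalition(formateur, others, majority):
    """Greedy rule of thumb: closest partner first, stop at a majority."""
    coalition, seats = [formateur], formateur.seats
    for partner in sorted(others, key=lambda p: abs(p.position - formateur.position)):
        if seats >= majority:
            break
        coalition.append(partner)
        seats += partner.seats
    return coalition

# Invented four-party legislature with a 51-seat majority threshold.
others = [Party("B", 30, 0.4), Party("C", 25, 0.9), Party("D", 10, 0.1)]
proposal = propose_coalition(Party("A", 35, 0.3), others, majority=51)
print([p.name for p in proposal])  # -> ['A', 'B'] (65 seats)
```

In the book's actual ABM the heuristics are richer and operate in high-dimensional issue spaces, but the underlying logic of simple, effective rules of thumb is the same.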
The proposed Artificial Intelligence Act (AI Act) is the first comprehensive attempt to regulate artificial intelligence (AI) in a major jurisdiction. This article analyses Article 9, the key risk management provision in the AI Act. It gives an overview of the regulatory concept behind the norm, determines its purpose and scope of application, offers a comprehensive interpretation of the specific risk management requirements and outlines ways in which the requirements can be enforced. This article can help providers of high-risk systems to comply with the requirements set out in Article 9. In addition, it can inform revisions of the current draft of the AI Act and efforts to develop harmonised standards on AI risk management.
New digital technologies, from AI-fired 'legal tech' tools to virtual proceedings, are transforming the legal system. But much of the debate surrounding legal tech has zoomed out to a nebulous future of 'robo-judges' and 'robo-lawyers.' This volume is an antidote. Zeroing in on the near- to medium-term, it provides a concrete, empirically minded synthesis of the impact of new digital technologies on litigation and access to justice. How far and fast can legal tech advance given regulatory, organizational, and technological constraints? How will new technologies affect lawyers and litigants, and how should procedural rules adapt? How can technology expand – or curtail – access to justice? And how must judicial administration change to promote healthy technological development and open courthouse doors for all? By engaging these essential questions, this volume helps to map the opportunities and the perils of a rapidly digitizing legal system – and provides grounded advice for a sensible path forward. This book is available as Open Access on Cambridge Core.
Parliamentary democracy involves a never-ending cycle of elections, government formations, and the need for governments to survive in potentially hostile environments. These conditions require members of any government to make decisions on a large number of issues, some of which sharply divide them. Officials resolve these divisions by 'logrolling' – conceding on issues they care less about, in exchange for reciprocal concessions on issues to which they attach more importance. Though realistically modeling this 'governance cycle' is beyond the scope of traditional formal analysis, this book attacks the problem computationally in two ways. Firstly, it models the behavior of “functionally rational” senior politicians who use informal decision heuristics to navigate their complex high stakes setting. Secondly, by applying computational methods to traditional game theory, it uses artificial intelligence to model how hyper-rational politicians might find strategies that are close to optimal.
With the rise of far-reaching technological innovation, from artificial intelligence to Big Data, human life is increasingly unfolding in digital lifeworlds. While such developments have made unprecedented changes to the ways we live, our political practices have failed to evolve at pace with these profound changes. In this path-breaking work, Mathias Risse establishes a foundation for the philosophy of technology, allowing us to investigate how the digital century might alter our most basic political practices and ideas. Risse engages major concepts in political philosophy and extends them to account for problems that arise in digital lifeworlds, including AI and democracy, synthetic media and surveillance capitalism, and how AI might alter our thinking about the meaning of life. Proactive and profound, Political Theory of the Digital Age offers a systemic way of evaluating the effect of AI, allowing us to anticipate and understand how technological developments impact our political lives – before it's too late.
This paper discusses the accountability gap problem posed by artificial intelligence. After sketching out the accountability gap problem, we turn to ancient Roman law and scrutinise how slave-run businesses dealt with the accountability gap through the indirect agency of slaves. Our analysis shows that Roman law developed a heterogeneous framework in which multiple legal remedies coexisted to accommodate the various competing interests of owners and contracting third parties. Moreover, Roman law shows that addressing the various emerging interests was a continuous and gradual process of allocating risks among different stakeholders. The paper concludes that these two findings are key for contemporary discussions on how to regulate artificial intelligence.
Automated decision-making takes up an increasingly significant place in the administrative state. This article presents a conception of discretion that is helpful for evaluating the proper place of algorithms in public decision-making. I argue that the algorithm itself is not a site of discretion. The threat is that automated decision-making alters the relationships between traditional actors in ways that can curtail discretion and human commitment. Algorithmic decision-makers can serve to fetter the discretion that the legislature and the populace expect to be exercised. We must strive to maintain discretion, moral agency, deliberative ideals, and human commitment through the system that surrounds the use of an algorithm, and to develop a new expertise that can retain and exercise the expected discretion. Backing this argument are traditional legal constraints, public expectations, and administrative law principles, tied together through the organizing principle of discretion.
The promised merits of data-driven innovation in general and algorithmic systems in particular hardly need enumeration. However, as decision-making tasks are increasingly delegated to algorithmic systems, questions about accountability arise. These pressing questions of algorithmic accountability, particularly with regard to data-driven innovation in the public sector, deserve ample scholarly attention. This paper therefore brings together perspectives from governance studies and critical algorithm studies to assess how algorithmic accountability succeeds or falls short in practice, and analyses the Dutch System Risk Indication (SyRI) as an empirical case. Dissecting a concrete case teases out the degree to which archetypical accountability practices and processes function in relation to algorithmic decision-making processes, and which new questions concerning algorithmic accountability emerge therein. The case is approached through the analysis of “scavenged” material. It was found that while these archetypical accountability processes and practices can be highly productive in dealing with algorithmic systems, they are simultaneously at risk. The current accountability configurations hinge predominantly on the ex ante sensitivity and responsiveness of the political fora. When these prove insufficient, mitigation in medias res or ex post is very difficult for other actants. In part, this is not a new phenomenon, but it is amplified in relation to algorithmic systems. Different fora ask different kinds of medium-specific questions of the actor, from different perspectives and with varying power relations. These algorithm-specific considerations relate to the decision-making around an algorithmic system, its functionality, and its deployment. Strengthening the sensitivity of ex ante political accountability fora to these algorithm-specific considerations could help mitigate this.
We call attention to an important but overlooked finding in research reported by Longoni, Bonezzi, and Morewedge (2019). Longoni et al. claim that people always prefer a human to an artificially intelligent (AI) medical provider. We show that this was only the case when the historical performance of the human and AI providers was equal. When the AI is known to outperform the human, their data show a clear preference for the automated provider. We provide additional statistical analyses of their data to support this claim.
The race to develop and implement autonomous systems and artificial intelligence has challenged the responsiveness of governments in many areas and none more so than in the domain of labour market policy. This article draws upon a large survey of Singaporean employees and managers (N = 332) conducted in 2019 to examine the extent and ways in which artificial intelligence and autonomous technologies have begun impacting workplaces in Singapore. Our conclusions reiterate the need for government intervention to facilitate broad-based participation in the productivity benefits of fourth industrial revolution technologies while also offering re-designed social safety nets and employment protections.
In Longoni et al. (2019), we examine how algorithm aversion influences utilization of healthcare delivered by human and artificial intelligence providers. Pezzo and Beckstead’s (2020) commentary asks whether resistance to medical AI takes the form of a noncompensatory decision strategy, in which a single attribute determines provider choice, or whether resistance to medical AI is one of several attributes considered in a compensatory decision strategy. We clarify that our paper both claims and finds that, all else equal, resistance to medical AI is one of several attributes (e.g., cost and performance) influencing healthcare utilization decisions. In other words, resistance to medical AI is a consequential input to compensatory decisions regarding healthcare utilization and provider choice decisions, not a noncompensatory decision strategy. People do not always reject healthcare provided by AI, and our article makes no claim that they do.
We clarify two points made in our commentary (Pezzo & Beckstead, 2020, this issue) on a recent paper by Longoni, Bonezzi, and Morewedge (2019). In both Experiments 1 and 4 from their paper, it is not possible to determine whether accuracy can compensate for algorithm aversion. Experiments 3A–C, however, do show a strong effect of accuracy, such that AI that is superior to a human provider is embraced by patients. Many papers, including Longoni et al., tend to minimize the role of this compensatory process, apparently because it seems obvious to the authors (Longoni, Bonezzi, & Morewedge, 2020, this issue). Such minimization, however, can lead to (mis)citations in which research that clearly demonstrates a compensatory role of AI accuracy is cited as non-compensatory.