A regulatory liability-based approach to reducing foodborne illnesses is widely used in the U.S. But how effective is it? We exploit regulatory regime variation across states and over time to examine the relationship between product liability laws and reported foodborne illnesses. We find a positive and statistically significant relationship between strict liability with punitive damages and the number of reported foodborne illnesses. We find, however, no statistically significant relationship between strict liability with punitive damages and the number of foodborne illness-related hospitalizations and deaths.
How can tort law contribute to a better understanding of the risk-based approach in the European Union’s (EU) Artificial Intelligence Act proposal and evolving liability regime? In a new legal area of intense development, it is pivotal to make the best use possible of existing regulation and legal knowledge. The main objective of this article is thus to investigate the relationship between traditional tort law principles, with a focus on risk assessments, and the developing legislation on artificial intelligence (AI) in the EU. The article offers a critical analysis and evaluation from a tort law perspective of the risk-based approach in the proposed AI Act and the European Parliament resolution on a civil liability regime for AI, with comparisons also to the proposal for a revised and AI-adapted product liability directive and the recently proposed directive on civil liability for AI. The discussion leads to the illumination of both challenges and possibilities in the interplay between AI, tort law and the concept of risk, displaying the large potential of tort law as a tool for handling rising AI issues.
International responsibility law today is in great need of theorizing or, at least, that is the present volume’s argument. This introduction sets the stage for that argument. It unfolds in four steps: first, it clarifies the reasons that led to putting this collection of essays together and explains what it hopes to achieve; second, it introduces the main theoretical challenges addressed in the volume; third, it provides some information about how the book is organized; and, finally, it sketches out the content of its successive chapters and their articulation.
There is no issue more central to a legal order than responsibility, and yet the dearth of contemporary theorizing on international responsibility law is worrying for the state of international law. The volume brings philosophers of the law of responsibility into dialogue with international responsibility law specialists. Its tripartite structure corresponds to the three main theoretical challenges in the contemporary practice of international responsibility law: the public and private nature of the international responsibility of public institutions; its collective and individual dimensions; and the place of fault therein. In each part, two international lawyers and two philosophers of responsibility law address the most pressing questions in the theory of international responsibility law. The volume closes with a comparative 'world tour' of the responsibility of public institutions in four different legal cultures and regions, identifying stepping-stones and stumbling blocks on the path towards a common law of international responsibility.
Since the second AI revolution started around 2009, society has witnessed new and steadily more impressive applications of AI. The growth of practical applications has in turn led to a growing body of literature devoted to the liability issues of AI. The present text is an attempt to assess the current state of the law and to advance the discussion.
The rapid development of robotics and intelligent systems raises the issue of how to adapt the legal framework to accidents arising from devices based on artificial intelligence (AI) and machine learning. In the light of the numerous legal studies published in recent years, it is clear that ‘tort law and AI’ has become one of the hot topics of legal scholarship both in national and comparative contexts.
The autonomy inherent in AI systems brings legal challenges. The reason is that it is no longer possible to predict whether and how explanations and actions emanating from AI systems originate, and whether they are attributable to the AI system or its operator. The core research question is whether the operator of an AI system is contractually liable for damage caused by its malfunctioning. Is contract law sufficiently prepared for the use of AI systems in contract performance? The answer is provided through a review of the common law, the CISG and the German Civil Code (BGB).
Tort law has a number of systems and structures that can be used to address the challenges posed by AI technologies. It will not be necessary to significantly alter our understanding of tort law’s foundations to be ready for AI, and that readiness may in turn significantly affect AI innovation and utilization.
Allocation of liability for harm caused at least partially by AI or a medical robot can be based upon a binary distinction: that between substitutive and complementary automation. When AI or robotics substitutes for a physician, strict liability is more appropriate than the standard negligence doctrine. When the same technology merely assists a professional, a less stringent standard is appropriate. Such standards will help ensure that the deployment of advanced medical technologies is accomplished in a way that complements extant professionals’ skills, while promoting patient safety.
Relying chiefly on the combination of costly, unsystematic, and unreliable FDA monitoring and state negligence actions, the current postapproval system for controlling medical device risks falls far short of assuring optimal levels of safety. The reform proposal we advance comprehensively addresses these law enforcement deficiencies. The contemplated changes are straightforward and simple to implement, yet would substantially reduce cost while increasing the effectiveness of both FDA monitoring and civil liability deterrence. Monitoring would be improved by requiring first-party insurers to investigate and report to the FDA the potential existence of a causal connection between the personal injury for which they are funding treatment and the patient’s (insured’s) use of or exposure to a medical device. The FDA would be authorized to enlist DOJ Civil Division enforcement of a federal cause-based strict liability action against the medical device manufacturer. The manufacturer would bear liability in full, with no reduction for risk contributions from the injured patient or other parties, and would pay damages in total to the US Government. We explain the regulatory advantages of this new regulatory rule of cause-based strict liability relative to conventional rules of negligence and strict liability.
It is widely accepted that tort law operates according to a hierarchy of protected interests. Some commentators suggest that this hierarchy can be put to dispositive uses in cases characterised by a clash of interests held respectively by the claimant and defendant (the inferior interest giving way). Others argue that thinking in terms of a hierarchy of interests sheds light on three unusual aspects of tort law: viz. the existence of torts that are actionable per se, the existence of strict liability torts, and the existence of actions in which injunctive relief is routinely awarded even though compensatory damages are tort law's default remedy. This article tests both claims. It concludes that an intuitively appealing hierarchy of interests can be identified, and that it might well possess dispositive significance all other things being equal. But it also observes that all other things are seldom equal, and that departures from the hierarchy occur for various reasons that can be clearly identified and which should be borne in mind when thinking about its dispositive utility. It also urges caution in making connections between the status of certain interests and the fact that they are protected by torts that are actionable per se, strict liability torts and torts in connection with which injunctions are awarded almost as a matter of course.
In the twenty-first century, it has become easy to break IP law accidentally. The challenges presented by orphan works, independent invention or IP trolls are merely examples of a much more fundamental problem: IP accidents. This book argues that IP law ought to govern accidental infringement much like tort law governs other types of accidents. In particular, the accidental infringer ought to be liable in IP law only when their conduct was negligent. The current strict liability approach to IP infringement was appropriate in the nineteenth century, when IP accidents were far less frequent. But in the Information Age, where accidents are increasingly common, efficiency, equity, and fairness support the reform of IP to a negligence regime. Patrick R. Goold provides the most coherent explanation of how property and tort interact within the field of IP, contributing to a clearer understanding of property and tort law and private law generally.
This paper argues for a sandbox approach to regulating artificial intelligence (AI) to complement a strict liability regime. The authors argue that sandbox regulation is an appropriate complement to a strict liability approach, given the need to maintain a balance between a regulatory approach that aims to protect people and society on the one hand and to foster innovation due to the constant and rapid developments in the AI field on the other. The authors analyse the benefits of sandbox regulation when used as a supplement to a strict liability regime, which by itself creates a chilling effect on AI innovation, especially for small and medium-sized enterprises. The authors propose a regulatory safe space in the AI sector through sandbox regulation, an idea already embraced by European Union regulators and where AI products and services can be tested within safeguards.
This chapter defends the existence of negligence, understood here as a form of inadvertent moral wrongdoing for which the wrongdoer is presumptively and non-derivatively responsible and blameworthy. The wrong in question is failure of due care. This common-sensical claim needs defense in view of widespread skepticism about the possibility of non-derivative inadvertent wrongdoing. A major source of this skepticism is the conviction that all wrongdoing must ultimately derive from intentional or knowing violations.
The IoT raises several questions germane to traditional products liability law and the UCC’s warranty provisions. These include how best to evaluate and remedy consumer harms related to insecure devices, malfunctioning devices, and the termination of services and software integral to a device’s operations. Consider that the modern IoT vehicle with an infotainment system generates massive quantities of data about drivers, and that mobile applications can be used to impact the operations of these vehicles.
The birth of strict products liability is often traced to Justice Roger Traynor's famous concurrence in Escola v. Coca-Cola Bottling Co. In that case, the California Supreme Court allowed recovery to a waitress who had been injured when a Coke bottle exploded in her hand. Although the majority based its decision on the evidentiary doctrine of res ipsa loquitur, Justice Traynor took the opportunity to present an argument for imposition of strict liability, which relieves plaintiffs of proving negligence in products liability cases. The rewritten feminist concurrence joins Traynor's approach but provides additional gender, race, and class rationales for imposing strict liability in order to strengthen consumer protection and workplace safety. Situating the case in its World War II context, the feminist concurrence discusses the pressing need for providing tort protection to new classes of minority and female workers who had recently entered the work force and to consumers who had been encouraged to purchase a growing array of consumer goods. The accompanying commentary explains the legal evolution from negligence to strict product liability and delves into the facts and the people behind the case.
Lisa M. v. Henry Mayo Newhall Memorial Hospital exemplifies the reluctance of many courts to impose vicarious liability in cases of employee sexual abuse, treating cases of sexual abuse differently from other cases. The California Supreme Court in Lisa M. ruled against a pregnant patient who had been sexually molested by a hospital technician under the guise of performing an ultrasound examination. The court determined that the assault was “outside the scope of employment,” not fairly attributable to the employer, and the result only of “propinquity and lust.” The rewritten feminist opinion recharacterizes the assault as an outgrowth of employment, emphasizing that the employee exercised job-created control and power over plaintiff’s body. Because sexual assaults are not uncommon in the healthcare setting, the feminist opinion regards the assault as foreseeable and would allow a jury to determine whether vicarious liability is warranted because the assault was committed within the scope of employment. The accompanying commentary situates the case at the intersection of sexual violence and women’s health and examines how job-created power can make a patient vulnerable to harm by medical professionals.
Within the last decade, the Supreme Court has made two telling observations. First, vicarious liability is ‘on the move’ (per Catholic Child Welfare Socy v Institute of the Brothers of the Christian Schools (Child Welfare Socy)); and second, ‘[t]he risk of an employee misusing his position is one of life’s unavoidable facts’ (per Mohamud v WM Morrison Supermarkets plc). To that, add the Court of Appeal’s recent observations that employers do not ‘[become] insurers for violent or other tortious acts by their employees’ (per Bellman v Northampton Recruitment Ltd); but that ‘there will indeed be cases of independent contractors where vicarious liability will be established’ (per Barclays Bank plc v Various Claimants).
In Chapter 5, Meritor Savings Bank, FSB v. Vinson and Oncale v. Sundowner Services deal, respectively, with the proof standards for sexual harassment and the question of whether and when Title VII forbids same-sex harassment. The rewritten Meritor dramatically alters the standard for employer liability, holding employers strictly liable for sexual harassment by supervisors, with no affirmative defense. Rewritten Oncale concludes that same-sex harassment (and hence harassment based on sexual orientation and gender identity) are illegal sex discrimination that occur “because of sex.” By making employers strictly liable, the rewritten Meritor would have effectively precluded hundreds of subsequent lower court cases and two Supreme Court cases. While the original Oncale openly refused to relate the egregious harms that the plaintiff had allegedly suffered, the rewritten opinion employs feminist storytelling techniques to demonstrate the harms suffered by the male plaintiff at the hands of his male coworkers. It explains that harassment by men of other men often occurs because of societal pressures on men to prove their masculinity and to police the boundaries of sex and sexuality.
Science fiction writers have frequently wrestled with questions relating to the ability of humans to understand the minds of robots and synthetic intellects. These works often portray robotic minds as functioning differently from those of humans. In some cases, analysis of someone’s mental processes and responses to specific situations is the only way to distinguish a human being from a robot. For example, one prominent protagonist in Isaac Asimov’s I, Robot collection of short stories is Dr. Susan Calvin, who is described as a trailblazer in the field of robot psychology. In Asimov’s stories, Dr. Calvin is the head “robopsychologist” for the fictional firm U.S. Robots and Mechanical Men, Incorporated. When a politician named Stephen Byerley runs for mayor of an unnamed major US city, Dr. Calvin is asked to determine whether Byerley is a human being or a robot.