
Liability for robots I: legal challenges

Published online by Cambridge University Press:  25 November 2021

Alice Guerra*
Affiliation:
Department of Economics, University of Bologna, via Angherà 22, 47921 Rimini, Italy
Francesco Parisi
Affiliation:
Department of Economics, University of Bologna, Bologna, Italy; School of Law, University of Minnesota, Minneapolis, Minnesota, USA
Daniel Pi
Affiliation:
School of Law, University of Maine, Portland, Maine, USA
*Corresponding author. Email: alice.guerra3@unibo.it
Rights & Permissions [Opens in a new window]

Abstract

In robot torts, robots carry out activities that are partially controlled by a human operator. Several legal and economic scholars across the world have argued for the need to rethink legal remedies as we apply them to robot torts. Yet, to date, there exists no general formulation of liability for robot accidents, and the proposed solutions differ across jurisdictions. Our research proceeds in a set of two companion papers. In this paper, we present the novel problems posed by robot accidents and assess the legal challenges and institutional prospects that policymakers face in the regulation of robot torts. In the companion paper, we build on the present analysis and use an economic model to propose a new liability regime that blends negligence-based rules and strict manufacturer liability rules to create optimal incentives for robot torts.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2021. Published by Cambridge University Press on behalf of Millennium Economics Ltd

1. Introduction

The economic analysis of tort law assumes the existence of at least two human actors: an injurer and a victim (e.g. Miceli, 2017; Shavell, 1980, 1987). Nonetheless, this assumption becomes increasingly tenuous with the advancement of automated technologies (e.g. De Chiara et al., 2021; Shavell, 2020). Rather than remaining the mere instruments of human decision-makers, machines are becoming the decision-makers themselves.

Since robots are insensitive to threats of legal liability, the question arises: how are we to regulate this new class of potential tortfeasors? The need for a theory to better understand robot torts is urgent, given that robots are already capable of driving automobiles and trains, delivering packages, piloting aircraft, trading stocks, and performing surgery with minimal human input or supervision. Engineers and futurists predict more revolutionary changes are still to come. How the law grapples with these emerging technologies will affect their rates of adoption and future investments in research and development. In the extreme case, the choice of liability regime could even extinguish technological advancement altogether. How the law responds to robot torts is thus an issue of crucial importance.

At the level of utmost generality, it is important to bear in mind that human negligence and machine error do not represent equivalent risks. Unlike ordinary tools used by a human operator, robots serve as a replacement for the decision-making of a reasonable person.Footnote 1 The social cost of machine error promises to be drastically lower than that of human negligence. We should therefore welcome the development of robot technology. Even if there were nothing that the law could do to reduce the risk of robot accidents, merely encouraging the transition to robot technology would likely effect a dramatic reduction in accident costs.

This paper comprises four sections. Section 2 discusses the novel problems posed by robot accidents and the reasons why robots, rather than other machines, need special legal treatment. Section 3 surveys the current legal approaches to robot accidents. Section 4 presents an overview of the companion paper (Guerra et al., 2021), where we build on the present legal analysis to consider the possibility of blending negligence-based rules and strict liability rules to generate optimal incentives for robot torts. There, a formal economic model is used to study the incentives created by our proposed rules.

2. Rethinking legal remedies for robot torts

In an early article in Science, Duda and Shortliffe (1983) argued that the difference between a computerized instrument and a robot is intent.Footnote 2 A computerized instrument – such as a computer program – is intended to aid human choice, while a robot is an autonomous, knowledge-based, learning system whose operation rivals, replaces, and outperforms that of human experts (Duda and Shortliffe, 1983: 261–268). Similar arguments on the dichotomy between mechanization and automation have been advanced in systems theory research. Among others, Rahmatian (1990) argued that automation ‘involves the use of machines as substitutes for human labor’, whereas ‘mechanization […] can take place without true automation’ (Rahmatian, 1990: 69). While computerized instruments are mere labor-saving devices (i.e. an extension of the human body in performing work, mostly purely physical activities), robots are also mind-saving devices (i.e. an extension not only of the human body but also of the mind, hence performing both physical and mental activities). Robots are designed to have their own cognitive capabilities, including ‘deciding (choosing, selecting, etc.)’ (Rahmatian, 1990: 69). Other scholars in systems theory research have put forth essentially the same arguments. For example, Ackoff (1974) defined automated technologies as machines that perform an activity for humans much as the latter would have done it themselves, or perhaps even more efficiently (Ackoff, 1974: 17). Thanks to the dynamic nature of the decision algorithm that drives their behavior, robots take into account new information gathered in the course of their operation and dynamically adjust their way of operating, learning from their own past actions and mistakes (Bertolini et al., 2016; Giuffrida, 2019; Giuffrida et al., 2017).
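To make the notion of a dynamically adjusting decision algorithm concrete, consider the following minimal sketch. It is our own illustration, not drawn from the works cited above: the decision rule is not fixed at production time, because the agent's value estimates – and hence its future choices – change with the outcomes it observes during operation.

```python
import random

class LearningAgent:
    """Minimal sketch of a 'mind-saving' device: a decision rule
    that updates itself from operating experience (illustrative only)."""

    def __init__(self, actions, epsilon=0.1):
        self.q = {a: 0.0 for a in actions}  # learned value estimates
        self.n = {a: 0 for a in actions}    # times each action was tried
        self.epsilon = epsilon              # exploration rate

    def decide(self):
        # Mostly exploit what past experience suggests is best;
        # occasionally explore alternatives.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def learn(self, action, outcome):
        # Incremental mean update: the algorithm that drives behavior
        # changes as new information is gathered during operation.
        self.n[action] += 1
        self.q[action] += (outcome - self.q[action]) / self.n[action]
```

Two copies of such an agent shipped on the same day will, after different operating histories, behave differently – which is precisely the feature that complicates the attribution of responsibility discussed below.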

In the face of the superior decision-making skills of a robot, the relationship between a robot and its operator differs from the relationship between an ordinary tool and its user. As the skills of a robot increase, the need for and desirability of human intervention decrease.Footnote 3 Although there may be special circumstances in which human judgment outperforms robots, robots outperform humans in most situations. Humans defer to the superior skills of robots and delegate important decisions to them (Casey, 2019). However, as robots' skills increase, their ‘thinking’ becomes more ‘inscrutable’, falling beyond human computational capacity (Michalski, 2018).Footnote 4 Given the opacity of a robot's decisions, it is very difficult – and often unwise – for operators to second-guess and override them (Lemley and Casey, 2019).

The high complexity of the decision algorithm and the dynamic adjustment of the programing in unforeseen circumstances are what make robots different from other machines and what – according to many scholars – call for special legal treatment and a new approach to modeling accidents (Bertolini, 2014).Footnote 5 Several legal and economic scholars across the world have argued for the need to rethink legal remedies as we apply them to robot torts (e.g. De Chiara et al., 2021; Lemley and Casey, 2019; Matsuzaki and Lindemann, 2016; Shavell, 2020; Talley, 2019).Footnote 6 The proposed legal solutions to robot torts differ across jurisdictions (e.g. Europe versus Japan; Matsuzaki and Lindemann, 2016),Footnote 7 yet there is a common awareness that, as the level of robot autonomy grows, it will become increasingly difficult under conventional tort or products liability law to attribute responsibility for robot accidents to a specific party (e.g. Bertolini et al., 2016). This problem is what Matthias (2004) called the ‘responsibility gap’.Footnote 8 Matsuzaki and Lindemann (2016) noted that in both Europe and Japan, the belief is that product liability's focus on safety would impair the autonomous functioning of the robot and slow down the necessary experimentation with new programing techniques. In a similar vein, in their US-focused article titled ‘Remedies for Robots’, Lemley and Casey wrote: ‘Robots will require us to rethink many of our current doctrines. They also offer important insights into the law of remedies we already apply to people and corporations’ (Lemley and Casey, 2019: 1311). Robots amount to a paradigmatic shift in the concept of instrumental products, which – according to Talley (2019) and Shavell (2020) – renders products liability law, as currently designed, unable to create optimal incentives for the use, production, and adoption of safer robots.

One of the challenges in the regulation of robots concerns accidents caused by ‘design limitations’, i.e. accidents that occur when the robot encounters a new, unforeseen circumstance that causes it to behave in an undesired manner. For example, the algorithm of a self-driving car could not ‘know’ that a particular stretch of road is unusually slippery, or that a certain street is used by teenagers for drag racing. Under conventional products liability law, we could not hold a manufacturer liable for not having included that specific information in the software: failing to account for every special circumstance cannot be regarded as a design flaw. However, we could design rules that keep incentives in place for manufacturers to narrow the range of design limitations through greater investments in R&D and/or safety updates. In our example, we may be able to incentivize manufacturers to design self-driving cars that can ‘learn’ such information and share their dynamic knowledge with other cars, reducing the risk of accidents in those locations, as sketched below.
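A fleet-level knowledge-sharing mechanism of the kind just described might, in its simplest form, look like the following sketch. It is purely illustrative: the class name, interface, and road-segment identifiers are our own hypothetical constructs, not any manufacturer's actual API.

```python
from collections import defaultdict

class FleetHazardMap:
    """Hypothetical shared knowledge base: cars report hazards observed
    in operation, and every car in the fleet can query them."""

    def __init__(self):
        self.reports = defaultdict(list)  # road segment -> hazard reports

    def report(self, segment, hazard, severity):
        # A car that detects, e.g., low traction uploads the observation.
        self.reports[segment].append((hazard, severity))

    def risk_level(self, segment):
        # Other cars query the pooled reports before entering a segment.
        if not self.reports[segment]:
            return 0.0
        return max(severity for _, severity in self.reports[segment])

# Usage: one car learns that a stretch of road is slippery...
fleet = FleetHazardMap()
fleet.report("route-9/km-12", "low_traction", severity=0.8)
# ...and another car adjusts before reaching it.
if fleet.risk_level("route-9/km-12") > 0.5:
    pass  # e.g. reduce speed, widen following distance, or reroute
```

A liability rule that rewards this kind of fleet learning would give manufacturers an incentive to narrow design limitations after the point of sale, not merely at the design stage.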

Another challenge in the regulation of robots concerns the double-edged capacity of robots to accomplish both useful and harmful tasks (Calo, 2015). Robots are increasingly perceived in society as social actors (Rachum-Twaig, 2020). Although legal scholars recognize that robots are mere physical instruments and not social actors, some have argued that, from a pragmatic and theoretical perspective, granting them a legal personhood status – similar to corporations – might address some of the responsibility problems mentioned above. Eidenmüller (2017a, 2017b) observed that robots appear capable of intentional acts and seem to understand the consequences of their behavior, with a choice of actions.Footnote 9 Furthermore, as Eidenmüller (2019) and Carroll (2021) pointed out, there is a ‘black box’ problem: because of machine learning and the dynamic programing of robots, nobody, including manufacturers, can fully foresee robots' future behavior. This creates a difficult accountability gap between manufacturers, operators, and victims. The attribution of legal personhood to a robot is thus proposed by these scholars as a possible way to fill the accountability gap.Footnote 10 The idea of attributing legal personhood to robots has been entertained in both Europe and the USA. The European Parliament has proposed the creation of a specific status for autonomous robots – a third type of personhood between natural personhood and legal personhood, called ‘electronic personhood’ (European Parliament, 2017). The mechanics of how the electronic personhood of robots would operate are broadly presented by Bertolini and Episcopo (2021): ‘Attributing legal personhood to a given technology, demanding its registration and compliance with public disclosure duties, minimal capital and eventually insurance coverage would turn it into the entry point for all litigation, easing the claimants’ position’ (Bertolini and Episcopo, 2021: 14). The idea of giving some form of legal personhood to robots has also been voiced in the USA (Armour and Eidenmüller, 2020; Carroll, 2021; Eidenmüller, 2017a, 2017b, 2019; Jones, 2018; Kop, 2019), although it has never advanced to the legislative level.

Many challenges would arise in the application of existing tort instruments to robots with electronic personhood. Traditional legal rules refer to human-focused concepts such as willfulness, foreseeability, and the duty to act honestly and in good faith – concepts that no longer fit the new realities involving robots. Unlike humans, robots are insulated from self-interested incentives, which is intrinsically a good thing. However, this insulation can at times be a double-edged sword: robots are not deterred by threats of legal or financial liability, since they have no personal freedoms or wealth at stake. To cope with this shortcoming, scholars and policymakers have investigated the possibility of making robots bearers of rights and duties, and holders of assets like corporations (Bertolini, 2020; Bertolini and Riccaboni, 2020; Giuffrida, 2019). In this respect, Eidenmüller (2017a) explicitly suggests that ‘smart robots should, in the not too distant future, be treated like humans. That means that they should […] have the power to acquire and hold property and to conclude contracts’. Future research should explore the extent to which these rights and financial entitlements could be leveraged by lawmakers to create incentives in robot tort situations.

3. Current legal status of robots

Robots are presently used in a variety of settings. In some areas they are already commonplace, while in others the technologies remain in their early stages (Księżak and Wojtczak, 2020). There exists no general formulation of liability for accidents caused by robots, although some legislatures have attempted to anticipate some of the issues that could arise from robot torts. In this section, we survey some representative implementations to observe how legal rules have responded to the presence of robot actors to date.

3.1 Corporate robots

In 2014, a Hong Kong-based venture capital fund appointed a robot to its board of directors. The robot – named ‘Vital’ – was chosen for its ability to identify market trends that were not immediately detectable by humans. The robot was given a vote on the board ‘as a member of the board with observer status’, allowing it to operate autonomously when making investment decisions. Although, to our knowledge, Vital is the only robot benefiting from a board seat and this form of recognition does not extend to other jurisdictions, the World Economic Forum released a 2015 report in which nearly half of the 800 IT executives surveyed expected additional robots to sit on corporate boards by 2025 (World Economic Forum, 2015). At present, Hong Kong and the UK already allow the delegation of directors' duties to ‘supervised’ robots (Möslein, 2018).

The adoption of robots in corporate boardrooms will unavoidably raise legal questions about liability arising from directors' use of robots and about losses to corporate investors and creditors caused by robots' errors (Burridge, 2017; Fox et al., 2019; Zolfagharifard, 2014). As Armour and Eidenmüller (2020) point out in their article titled ‘Self-Driving Corporations?’, when robot directors become a reality, corporate law will need to deploy ‘other regulatory devices to protect investors and third parties from what we refer to as “algorithmic failure”: unlawful acts triggered by an algorithm, which cause physical or financial harm’. However, as of today, these questions remain without proper answers, and the regulation of corporate robots has been left within the discretionary shield of corporate charters.

3.2 Aircraft autopilot

Aircraft autopilot systems are among the oldest class of robot technologies. The earliest robot flight system – a gyroscopic wing leveler – was implemented as far back as 1909 (Cooling and Herbers, 1983: 693). After a century of development, autopilot technology has progressed to nearly full automation. Aircraft autopilot systems are presently capable of taking off, navigating to a destination, and landing with minimal human input.

The longevity of autopilot technology in aviation affords us a clear exemplar of how the law can respond to the emergence of robot technology. Early treatment of autopilot cases was mixed. The standard for liability was not negligence, but rather strict liability; however, the cases were not litigated as a species of products liability. Aircraft and autopilot manufacturers were therefore rarely found liable (see Goldsmith v. Martin (221 F. Supp. 91 [1962]); see also Cooling and Herbers, 1983; Elish and Hwang, 2015). Relatively early on, it was established that operators (i.e. the airlines) would be held liable when an accident was caused by an autopilot system (see Nelson v. American Airlines (263 Cal. App. 2d 742 [1968])). There were two main reasons why aircraft and autopilot manufacturers were generally successful in avoiding liability: first, they were punctilious in crafting enforceable disclaimers and safety warnings, which effectively shielded them from products liability claims; and second, manufacturers aggressively litigated any claims against them, rarely settled, and thereby established favorable precedents (Leveen, 1983).Footnote 11

The legal outcome is largely unchanged today. It remains the airlines – not the manufacturers – that are liable for harms caused by autopilot systems. However, although the result has not changed, the legal justifications have evolved. Products liability law has undergone a radical transformation since the early autopilot accident cases, yet manufacturers continue to successfully avoid liability, for two reasons. First, in order for a products liability claim to succeed, the risk of harm must be reasonably foreseeable. Present-day aircraft manufacturing is heavily regulated, and an autopilot system that satisfactorily meets Federal Aviation Administration requirements is unlikely to be susceptible to any ‘reasonably foreseeable’ risk of harm. Direct regulation thus pre-empts tort liability. Second, even when an autopilot system is engaged, pilots have a duty to monitor it and override it if operation becomes unsafe.Footnote 12 The logic is that the human operator is legally responsible for anything that a robot does, because the human ultimately chooses to engage (and not override) the machine.

3.3 Self-driving cars

Self-driving cars are the most salient future use of robot technology. For quite some time, prototypes have demonstrated the feasibility of the technology, and fully autonomous vehicles are now part of daily reality, from private cars to commercial taxi transportation, delivery robots, and self-driving trucks.Footnote 13 In September 2016, the Department of Transportation published the Federal Automated Vehicles Policy, providing legislative guidance for states contemplating the regulation of self-driving cars (National Highway Traffic Safety Administration, 2016). A growing number of jurisdictions have enacted laws regulating the use of self-driving cars. At present, in the USA, 50 states and the District of Columbia have introduced autonomous vehicle bills.Footnote 14 However, legislative efforts thus far have principally focused on determining whether an autonomous vehicle may be operated on public roads.Footnote 15 Few jurisdictions have attempted to address the tort issues relating to self-driving cars. The Federal Automated Vehicles Policy suggests various factors that lawmakers should consider when formulating a liability rule (National Highway Traffic Safety Administration, 2016: 45–46):

States are responsible for determining liability rules for HAVs [‘highly automated vehicles’]. States should consider how to allocate liability among HAV owners, operators, passengers, manufacturers, and others when a crash occurs. For example, if an HAV is determined to be at fault in a crash then who should be held liable? For insurance, States need to determine who (owner, operator, passenger, manufacturer, etc.) must carry motor vehicle insurance. Determination of who or what is the ‘driver’ of an HAV in a given circumstance does not necessarily determine liability for crashes involving that HAV. For example, States may determine that in some circumstances liability for a crash involving a human driver of an HAV should be assigned to the manufacturer of the HAV.

Rules and laws allocating tort liability could have a significant effect on both consumer acceptance of HAVs and their rate of deployment. Such rules also could have a substantial effect on the level and incidence of automobile liability insurance costs in jurisdictions in which HAVs operate.

The few jurisdictions addressing the problem of tort liability merely push the problem back. For example, Tenn. Code Ann. §55-30-106(a) (2019) states that ‘[l]iability for accidents involving an [Automated Driving System]-operated vehicle shall be determined in accordance with product liability law, common law, or other applicable federal or state law’. Other states have enacted similarly opaque boilerplate that fails to delineate the applicable liability rule or to define the legal ‘driver’ in the context of self-driving cars as the robot itself.

In Europe, driverless vehicle policy proposals have evolved into a multinational policy initiative under the United Nations. EU member states – and other countries, including Japan and South Korea – have agreed to common regulations for vehicles that can take over some driving functions (e.g. mandatory use of a black box; automated lane keeping systems).Footnote 16 Nonetheless, unlike in the USA, those countries currently do not have specific regulations for fully automated cars.Footnote 17 In the UK, policies on driverless vehicles are still evolving: press releases from the UK Department for Transport refer to a regulatory process that has been underway since the summer of 2020 and is expected to continue through 2021 and the following years.Footnote 18

In Japan, the Road Transport Vehicle Act and the Road Traffic Act were revised to account for the possibility of autonomous vehicles driving on public roads (Imai, 2019). Those revisions have significantly reduced the legal obstacles to the operation of quasi-autonomous vehicles (SAE level 3), but not of fully self-driving vehicles (SAE level 4). The legalization of fully autonomous vehicles is still being debated, mainly due to issues related to the determination of rules for criminal and civil liability in the event of traffic accidents.Footnote 19

The existing regulations of automated vehicles specify safety standards and mark the boundaries of legalization for the SAE automation levels, but leave open questions about how existing liability rules should be tailored to allocate accident losses. For example, the interaction of negligence torts and products liability is indeterminate when the driver of a vehicle is a robot. In an ordinary car accident, the human driver is liable under negligence torts if he/she failed to exercise due care, and the manufacturer is liable if the accident was caused by a manufacturing or design defect. If there is neither negligence nor a product defect, then the victim is left uncompensated for the accident loss. On the one hand, it could be argued that robot torts fall within the domain of products liability because the self-driving software is simply part of the car. It is well established that automobile manufacturers have a duty to ensure that the design of an automobile mitigates danger in case of a collision (Larsen v. General Motors Corp. (391 F.2d 495 [1968])). This rule would naturally extend to self-driving cars, where manufacturers are afforded greater opportunity to avert or mitigate accidents, thereby expanding their duty of care. The standard for demonstrating a defect in a self-driving car can be inferred from existing case law. For example, in In re Toyota Motor Corp. Unintended Acceleration Mktg., Sales Practices & Prods. Liab. Litig. (978 F. Supp. 2d 1053 [2013]), vehicles produced by Toyota automatically accelerated without any driver action, and the plaintiffs were granted recovery.Footnote 20 Similar reasoning could be transposed, mutatis mutandis, to self-driving vehicles.

On the other hand, it could be argued that robot torts fall within the domain of negligence torts, because autonomous driving is not qualitatively different from earlier innovations in automobile technology. Automation is not a discrete state, but rather a continuum. The electric starter, automatic transmission, power steering, cruise control, and anti-lock brakes have all increased the control gap between the operator and the vehicle. Nonetheless, none of these technological innovations has excused the operator from tort liability. The move to autonomous driving will not be instantaneous, and it is unlikely to be total.Footnote 21 It is likely that for the foreseeable future operators will have the option to disengage autonomous operation. Indeed, it is plausible that there will be conditions under which it would constitute negligence to engage autonomous operation.Footnote 22 As long as the operator is ultimately in control – even if that control only extends to whether autonomous operation is engaged – traditional tort doctrine identifies the operator rather than the manufacturer as the primary bearer of liability.

Thus, reasonable arguments can be advanced for assigning liability to the manufacturer as well as to the operator. However, claiming that robot torts should be adjudicated ‘in accordance with product liability law, common law, or other applicable federal or state law’ merely begs the question: tort law is a blank slate with respect to self-driving cars. The Federal Automated Vehicles Policy merely suggests factors to consider when formulating a rule; it does not recommend any particular liability rule. Indeed, the few states that have acknowledged the issue have merely deferred the problem to existing law, despite that law's indeterminacy on the novel question.

3.4 Medical robots

Another recent and promising use of robot technology is in the field of medicine. Robots have been utilized in surgical operations since at least the 1980s, and their usage is now widespread (e.g. Lanfranco et al., 2004; Mingtsung and Wei, 2020).Footnote 23 Due to their greater precision and smaller size, robots can reduce the invasiveness of surgery. Previously inoperable cases are now feasible, and recovery times have been shortened.

Some surgical robots require constant input from surgeons. For example, the da Vinci and ZEUS robotic surgical systems use robotic arms linked to a control system manipulated by the surgeon.Footnote 24 While the da Vinci and ZEUS systems still require input from a human operator, in other areas of medicine there is a general trend toward even greater robot autonomy. Many healthcare providers are beginning to use artificial intelligence to diagnose patients and propose treatment plans. These artificial intelligence systems analyze data, make decisions, and output results, although the results may be overridden by a human operator or supervisor (Kamensky, 2020). As the technology further develops, it is plausible that surgical robots will require even less input from operators.

The applicable tort regime for medical robots is still evolving (see, e.g. Bertolini, 2015 on liability regimes for robotic prostheses). Allain (2012) provides an overview of the tort theories that victims have used in cases involving surgical robots, including medical malpractice, vicarious liability, products liability, and the learned intermediary doctrine. In instances where medical professionals actively control surgical robots, victims often assert medical malpractice claims that focus on the negligence of the medical professional, with reasonableness standards evolving over time based on advances in technology and knowledge. If the surgical robot or artificial intelligence is deemed a medical product – and therefore subject to Food and Drug Administration regulations – victims also often assert a products liability claim against manufacturers (Marchant and Tournas, 2019). However, this area of law remains relatively undefined, especially in cases involving software only.Footnote 25

As with self-driving cars, victims currently have no clear liability regime under which to seek compensation from operators or manufacturers for accidents involving autonomous medical robots. At present, fully autonomous medical robots are still relatively uncommon; however, machines are taking on an ever-increasing share of decision-making tasks (see Kassahun et al., 2016). The tort issues that have been litigated thus far have tended to revolve around operator error (see, e.g. Taylor v. Intuitive Surgical, Inc. (389 P.3d 517 [2017])). Thus, for our purposes – much like self-driving car accidents – the law of medical robot torts is a tabula rasa.

3.5 Military robots

Military drones and robotic weapons are another area where robot torts are implicated. These machines are already being used to identify and track military targets, and weaponized drones have been used extensively in lethal combat. The UN Security Council Report of March 8, 2021 (UN S/2021/229), regarding a Turkish-made military drone that autonomously hunted humans in Libya in March 2020 without any human input or supervision, is just the first of possibly many instances of autonomous attacks by military robots. In recent years, media speculation about this topic has been rampant, and the Libya incident has revived the debate.Footnote 26 It is easy to imagine other circumstances in the near future where constant communication with a human operator may not be possible and the identification and killing of an enemy target will be conducted autonomously. Should military technology continue to develop along this trajectory, it seems inevitable that other innocent targets will be attacked and eventually killed.

At present, no legal framework exists in the USA to address a mistaken killing by a military robot. Regarding the civilian use of non-military drones, the Federal Aviation Administration has begun in recent years to address ways to regulate drone usage within the USA, although it has not yet systematically addressed liability for physical harm (Hubbard, 2014).Footnote 27 In an August 2021 report, Human Rights Watch and the Harvard Law School International Human Rights Clinic presented a proposal for a normative and operational framework on robotic weapons. States that favor an international treaty regulating autonomous weapon systems agree that humans must be required to play a role in the use of force, with a prohibition on robotic weapons that make life-and-death decisions without meaningful human control.Footnote 28

3.6 Other uses

Robots are also used in factories and other industrial settings due to their ability to execute repetitive tasks quickly and efficiently (Bertolini et al., 2016). When an industrial robot injures a victim, it often occurs in the context of employment. In such instances, workers are typically limited to claiming workers' compensation and are barred from asserting tort claims against their employer. Many states provide exceptions to this rule for situations where the employer acted with an intent to injure or with a ‘deliberate intention’ of exposing the worker to risk. However, thus far most of the cases brought by victims have proven unsuccessful (Hubbard, 2014). Due to the relatively controlled environment of factories and other industrial settings, operators can typically ensure a relatively high probability of safe operation and prevent injuries to potential victims.

4. Looking forward

In the companion paper (Guerra et al., 2021), we develop a model of liability for robots. We consider a fault-based liability regime in which operators and victims bear accident losses attributable to their negligent behavior, and manufacturers are held liable for non-negligent robot accidents. We call that rule ‘manufacturer residual liability’ and show that it provides a second-best efficient set of incentives, nearly accomplishing all four objectives of a liability regime, i.e. incentivizing (1) efficient care levels; (2) efficient investments in developing safer robots; (3) the adoption of safer robots; and (4) efficient activity levels. In our analysis, we bracket off the many interesting philosophical questions that commonly arise when considering autonomous robots' decision-making. For example, a self-driving car may be faced with a situation where the vehicle ahead of it abruptly brakes, and the robot must choose whether to collide with that vehicle or swerve onto the sidewalk, where it risks hitting pedestrians. Alternatively, a robot surgeon may be forced to make split-second decisions requiring contentious value judgments. In such instances, should the robot choose a course of action that would result in a high chance of death and a low chance of healthy recovery, or one that would result in a lower chance of death but a higher chance of survival with an abysmally low quality of life? While these moral questions are serious and difficult (Giuffrida, 2019; Sparrow and Howard, 2017), we exclude them from our inquiry because we do not consider them critical for the solution to the incentive problem that we are tackling. First, as a practical matter, it cannot seriously be entertained that the design of rules governing such a critical area of technological progress should be put on hold until philosophers ‘solve’ the trolley problem or the infinitude of thought experiments like it. Second, even if ‘right answers’ exist to the ethical problems that a robot may face, its failure to choose the ‘morally correct’ course of action in some novel circumstance unanticipated by its designers can be construed by courts or lawmakers as a basis for legal liability. The objective of tort law is to minimize the social cost of accidents, and if compliance with virtuous conduct in ethical boundary cases helps to accomplish that social objective, ethical standards should be incorporated into the legal standards of due care. Finally, if it is mandated as a matter of public policy that a certain approach to moral problems should be implemented, then this can be effected by direct regulation of robot manufacturing, outside the rules of tort liability.
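In schematic terms – using our own notation rather than the companion paper's formal model, and abstracting from comparative-negligence refinements – manufacturer residual liability allocates an accident loss $L$ as follows:

```latex
% x_o, x_v: care taken by operator and victim;
% \bar{x}_o, \bar{x}_v: the corresponding due-care standards.
\[
\text{bearer of } L =
\begin{cases}
\text{operator}     & \text{if } x_o < \bar{x}_o, \\
\text{victim}       & \text{if } x_o \ge \bar{x}_o \text{ and } x_v < \bar{x}_v, \\
\text{manufacturer} & \text{if } x_o \ge \bar{x}_o \text{ and } x_v \ge \bar{x}_v.
\end{cases}
\]
```

The third, residual case is what distinguishes this rule from ordinary negligence, under which the loss from a non-negligent accident would simply remain with the victim.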

Future research should consider that, with some of the new programing techniques, the improvement of the robot can be carried out by the robot itself, and robots can evolve beyond the design and foresight of their original manufacturers. With these technologies, legal policymakers face what Matthias (2004) described as the ‘responsibility gap’, whereby it is increasingly difficult to attribute the harmful behavior of ‘evolved’ robots to the original manufacturer. In this context, models of liability in which robots could become their own legal entities with financial assets attached to them, like corporations, could be considered. This could, but need not, require the granting of (‘electronic’) legal personhood to robots, as discussed in Eidenmüller (2017b) and Bertolini (2020).

The issue has several implications that deserve future investigation. For example, a simple bond or escrow requirement for robots likely to cause harm to third parties could create a liability buffer to provide compensation: robots could be assigned some assets to satisfy future claims, and perhaps a small fraction of the revenues earned from the robot's operation could be automatically diverted to the robot's asset base, improving its solvency. Claims exceeding the robot's assets could then fall on the manufacturer or the robot's operator, as in the sketch below. An institutionally more ambitious alternative would be to conceive of robots as profit-maximizing entities, just like corporations, owned by single or multiple investors. More efficient and safer robots would yield higher profits and attract more capital on the market, driving less efficient and unsafe robots out of the market. This process would mimic the natural selection of firms in the marketplace and decentralize to corporate investors the decisions to acquire better robots and to invest in optimal safety. Liability would no longer risk penalizing manufacturers, but would instead reward forward-looking investors, possibly fostering greater levels of innovative research.
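A minimal sketch of such an escrow mechanism follows. The class name and the 2% diversion rate are hypothetical illustrations of the mechanism described above, not a proposal calibrated to any jurisdiction.

```python
class RobotEscrow:
    """Hypothetical liability buffer: a fraction of the robot's revenues
    accrues to an asset base out of which tort claims are paid first."""

    def __init__(self, diversion_rate=0.02):
        self.diversion_rate = diversion_rate  # share of revenue diverted
        self.assets = 0.0                     # the robot's asset base

    def record_revenue(self, amount):
        # Automatically divert a small fraction of earnings to the escrow.
        self.assets += self.diversion_rate * amount

    def pay_claim(self, claim):
        # Claims are satisfied from the robot's assets first; any excess
        # would fall on the manufacturer or operator, as discussed above.
        paid = min(claim, self.assets)
        self.assets -= paid
        residual = claim - paid
        return paid, residual

# Usage: after 1,000,000 in revenues at a 2% diversion rate...
escrow = RobotEscrow(diversion_rate=0.02)
escrow.record_revenue(1_000_000)           # asset base is now 20,000
paid, residual = escrow.pay_claim(35_000)  # 20,000 paid; 15,000 residual
```

Under this design, safer robots accumulate assets faster than they deplete them through claims, which is one way the market selection process described above could operate.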

As a final note, we observe that the design of an applicable liability regime for robot technologies is not the only mechanism by which to incentivize further automation. Other means are also available, including regulation and mandatory adoption requirements, intellectual property rights, prizes, preferential tax treatments, and tax premiums. Insurance discounts for individuals adopting automated technologies can mitigate potentially high adoption costs. An optimal combination of these policy instruments may foster widespread use of safer automated technologies.

Acknowledgements

The authors are indebted to Geoffrey Hodgson and the anonymous referees for their insightful and valuable comments. The authors are grateful to Carole Billiet, Emanuela Carbonara, Andrew Daughety, Herbert Dawid, Luigi A. Franzoni, Anna Guerra, Fernando G. Pomar, Roland Kirstein, Peter Krebs, Jennifer F. Reinganum, Enrico Santarelli, Eric Talley, Gerhard Wagner, and the participants of the ZiF Research Group 2021 Opening Conference ‘Economic and Legal Challenges in the Advent of Smart Products’, for discussions and helpful suggestions, and to Scott Dewey, Ryan Fitzgerald, Anna Clara Grace Parisi, and Rakin Hamad for their research contribution. An early draft of this idea by Alice Guerra and Daniel Pi was circulated under the title ‘Tort Law for Robot Actors’.

Footnotes

1 For example, an individual can ‘tell’ a Google self-driving car to take him/her home, but has only limited control over how the car will accomplish that task. Unsurprisingly, in the context of self-driving cars, the term driver is meant to include a corporation-driver (Smith, 2014). States such as Nebraska have already adopted a broad definition of the term driver, which is ‘to operate or be in the actual physical control of a motor vehicle’ (Neb. Rev. Stat. §60-468), where operate has been defined by courts as including any mechanical or electrical agency that sets the vehicle in action or navigates the vehicle. Similarly, the National Highway Traffic Safety Administration has stated that the self-driving system can be considered the driver of the vehicle (National Highway Traffic Safety Administration, 2013).

2 On the notion of intentionality or purposefulness, see, e.g. Ackoff and Emery (1972), who defined a ‘purposeful individual or system’ as ‘one that can produce (1) the same functional type of outcome in different structural ways in the same environment and (2) can produce different outcomes in the same and different structural environments’. Importantly, a purposeful system is one that ‘can change its goal under constant conditions; it selects ends as well as means and thus displays will’. We are grateful to an anonymous referee for this suggestion.

3 Two new modes of programing that differ from the traditional algorithmic programing of robots – ‘machine learning’ and ‘genetic and evolutionary programing’ – have further expanded the horizons in the evolution of artificial intelligence. With these programing modes, robots operate with a range of programs that randomly compete against each other, and only the variations of the program that carry out tasks best will survive, while the others die (a ‘survival of the fittest’ programing approach). The surviving programs replicate themselves, making slight modifications to their ‘genes’ for the next round of tasks (Michalski, 2018). See also Michalski's (2018) discussion of companies that have invested in robots capable of building other, improved robots, thus putting human creators one more step away from future prospective victims.

4 Mulligan (2017) refers to these as ‘black box algorithms’: ones that not even the original designers and programmers can decipher. ‘Machine learning’ and ‘genetic and evolutionary programing’ (see supra note 3) have further increased the complexity and opacity of the robot's decision-making process.

5 We thank Geoffrey Hodgson for encouraging us to elaborate on the discussion that follows.

6 For a survey of the difficulties that legal scholars face when attempting to apply existing legal rules to robot torts, see Chopra and White (2011: 119–152).

7 In their comparative study, Matsuzaki and Lindemann (2016) showed that both the legal framing and the concrete solutions to robot torts differ between Europe and Japan, especially in the legal construct of the robot as an ‘agent’ of the operator. While the European regulation debate explicitly addresses the degree of machine autonomy and its impact on legal institutions, this is not the case in Japan. See also, e.g. Leis (2006), MacDorman et al. (2009), and Šabanović (2014).

8 Specifically, Matthias (2004) wrote: ‘The rules by which [robots] act are not fixed during the production process, but can be changed during the operation of the machine, by the machine itself. This is what we call machine learning. […] [T]he traditional ways of responsibility ascription are not compatible with our sense of justice and the moral framework of society because nobody has enough control over the machine's actions to be able to assume the responsibility for them. These cases constitute what we will call the responsibility gap’ (Matthias, 2004: 177).

9 We are grateful to Geoffrey Hodgson and an anonymous referee for pointing this literature to us.

10 As Carroll (2021) put it: ‘the legal framework that the US should ultimately adopt for the liability of self-driving cars is the notion of electronic legal personhood’.

11 The regulation of autopilots and other aviation equipment in Europe and Japan is equally nuanced. See, e.g. the ‘Easy Access Rules for Airworthiness and Environmental Certification (Regulation (EU) No 748/2012)’ for Europe, and the ‘General Policy for Approval of Types and Specifications of Appliances’ for Japan (available at https://www.mlit.go.jp/common/001111795.pdf; last accessed October 2021).

12 14 Code of Federal Regulations §91.3 (‘The pilot in command of an aircraft is directly responsible for, and is the final authority as to, the operation of that aircraft’).

13 The Society of Automotive Engineers (SAE) defines six levels of autonomy, ranging from 0 (fully manual) to 5 (fully autonomous). Most automakers currently developing self-driving vehicles are seeking level 4 autonomy, which does not require human interaction in most circumstances. Usually these vehicles are limited to routes or areas that have previously been mapped. See SAE International. 2021. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (https://www.sae.org/standards/content/j3016_202104/; last accessed October 2021).

14 The National Conference of State Legislatures maintains a database of state autonomous vehicle legislation, which is regularly updated. The searchable database is available at https://www.ncsl.org/research/transportation/autonomous-vehicles-legislative-database.aspx (last accessed October 2021).

15 For example, several states require that a self-driving car (1) satisfies the vehicular requirements of human-operated cars, (2) is capable of complying with traffic and safety laws, and (3) in case of a system failure, is capable of entering a ‘minimal risk condition’, which allows the car to achieve a reasonably safe state (e.g. by pulling over to the shoulder of the road when feasible). See, e.g. Iowa Code §321.515 (2020); La. Stat. Ann. §32:400.3 (2019); N.C. Gen. Stat. §20-401(h) (2020); Tenn. Code Ann. §§55-30-103 (2019).

16 See ECE/TRANS/WP.29/2020/81 and ‘Addendum 156 – UN Regulation No. 157’ of March 4, 2021.

17 See, ‘UN Regulation on Automated Lane Keeping Systems is milestone for safe introduction of automated vehicles in traffic’. Published on June 24, 2020. Available at https://unece.org/transport/press/un-regulation-automated-lane-keeping-systems-milestone-safe-introduction-automated (last accessed July 2021).

18 See, e.g. ‘U.K. government announces Automated Lane Keeping System call for evidence’. Published on August 18, 2020. Available at https://www.gov.uk/government/news/uk-government-announces-automated-lane-keeping-system-call-for-evidence. See also ‘Rules on safe use of automated vehicles on GB roads’. Published on April 28, 2021. Available at https://www.gov.uk/government/consultations/safe-use-rules-for-automated-vehicles-av/rules-on-safe-use-of-automated-vehicles-on-gb-roads.

19 See, e.g. ‘“Level 4” self-driving transit cars in Japan won't require licensed passengers: expert panel’. Available at https://mainichi.jp/english/articles/20210402/p2a/00m/0na/025000c; and ‘Legalization of Self-Driving Vehicles in Japan: Progress Made, but Obstacles Remain’. Available at: https://www.dlapiper.com/en/japan/insights/publications/2019/06/legalization-of-self-driving-vehicles-in-japan/ (last accessed July 2021).

20 See also Cole v. Ford Motor Co. (900 P.2d 1059 [1995]) (holding the manufacturer liable when the cruise control function caused the car to accelerate unexpectedly). See generally Greenman v. Yuba Power Products (59 Cal. 2d 57 [1963]); Escola v. Coca-Cola Bottling Co. (24 Cal. 2d 453 [1944]); Ulmer v. Ford Motor Co. (75 Wash. 2d 522 [1969]); Restatement (Third) of Torts: Products Liability.

21 SAE level 5 autonomous vehicles are defined as being able to drive under all conditions, but there may still be component limitations – e.g. heavy rain making it difficult for sensors to distinguish objects – under which an autonomous vehicle may not operate but a human could.

22 For example, see Plumer, Brad (2016). ‘5 Big Challenges That Self-Driving Cars Still Have to Overcome’. Vox. Available at https://www.vox.com/2016/4/21/11447838/self-driving-cars-challenges-obstacles (last updated April 21, 2016).

23 It is also worth mentioning the spate of peer-reviewed scholarly journals dedicated to the topic that arose during this period. For example, the International Journal of Medical Robotics and Computer Assisted Surgery, established in 2004, the Journal of Robotic Surgery, established in 2007, the American Journal of Robotic Surgery, established in 2014, and the Journal of Medical Robotics Research, established in 2016.

24 In a recent case involving the da Vinci robotic surgical system, a Florida man died as a result of a botched kidney surgery. The family claimed negligence based on the surgeon's lack of training and experience with the system, but the case was later settled out of court (Allain, 2012).

25 In one recent decision involving a surgical robot, Taylor v. Intuitive Surgical, Inc. (389 P.3d 517 [2017]), the court sought to decide whether the surgeon – i.e. the operator – or the manufacturer of the robotic surgical system would be liable, and whether the manufacturer had a duty to warn. The court held that medical device manufacturers have a duty to warn hospitals and operators of the risks of the system.

26 See, for example, Campaign to Stop Killer Robots, ‘Country Positions on Negotiating a Treaty to Ban and Restrict Killer Robots’ (September 2020), available at https://www.stopkillerrobots.org/wp-content/uploads/2020/05/KRC_CountryViews_25Sep2020.pdf (last accessed August 2021); Grothoff, Christian and J. M. Porup (2016), ‘The NSA's SKYNET Program May Be Killing Thousands of Innocent People’, Ars Technica, February 16, available at https://arstechnica.co.uk/security/2016/02/the-nsas-skynet-program-may-be-killing-thousands-of-innocent-people/; ‘A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says’ (June 1, 2021), available at https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d (last accessed August 2021).

27 For a review of unmanned aerial vehicle regulations on a global scale, see Stöcker et al. (2017) and Jones (2017); for European cases, see Esteve and Domènech (2017); for military drone laws and policies in Japan, see Sheets (2018).

28 See the Report ‘Areas of Alignment: Common Visions for a Killer Robots Treaty’ which presents the objections expressed by governments at the official Convention on Conventional Weapons (held in September 2020) to delegating life-and-death decisions to robots. Available at https://www.hrw.org/sites/default/files/media_2021/07/07.2021%20Areas%20of%20Alignment.pdf (last accessed August 2021).

References

Ackoff, R. L. (1974), Redesigning the Future, New York: Wiley.
Ackoff, R. L. and Emery, F. (1972), On Purposeful Systems, Chicago: Aldine Atherton.
Allain, J. S. (2012), ‘From Jeopardy to Jaundice: The Medical Liability Implications of Dr Watson and Other Artificial Intelligence Systems’, Louisiana Law Review, 73(4): 1049–1079.
Armour, J. and Eidenmüller, H. (2020), ‘Self-Driving Corporations?’, Harvard Business Law Review, 10: 87–116.
Bertolini, A. (2014), ‘Robots and Liability – Justifying a Change in Perspective’, in F. Battaglia, N. Mukerji and J. Nida-Rümelin (eds), Rethinking Responsibility in Science and Technology, Pisa, Italy: Pisa University Press, pp. 143–166.
Bertolini, A. (2015), ‘Robotic Prostheses as Products Enhancing the Rights of People with Disabilities. Reconsidering the Structure of Liability Rules’, International Review of Law, Computers & Technology, 29(2–3): 116–136.
Bertolini, A. (2020), ‘Artificial Intelligence and Civil Liability’, Bruxelles: European Parliament – Committee on Legal Affairs, 608: 1–132.
Bertolini, A. and Episcopo, F. (2021), ‘The Expert Group's Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies: A Critical Assessment’, European Journal of Risk Regulation, 12(3): 1–16.
Bertolini, A. and Riccaboni, M. (2020), ‘Grounding the Case for a European Approach to the Regulation of Automated Driving: The Technology-Selection Effect of Liability Rules’, European Journal of Law and Economics, 51: 243–284.
Bertolini, A., Salvini, P., Pagliai, T., Morachioli, A., Acerbi, G., Cavallo, F., Turchetti, G. and Dario, P. (2016), ‘On Robots and Insurance’, International Journal of Social Robotics, 8(3): 381–391.
Burridge, N. (2017), ‘Artificial Intelligence Gets a Seat in the Boardroom’, Nikkei Asian Review, 10 May 2017, https://asia.nikkei.com/Business/Companies/Artificial-intelligence-gets-a-seat-in-the-boardroom (accessed 17 Nov 2021).
Calo, R. (2015), ‘Robotics and the Lessons of Cyberlaw’, California Law Review, 103(3): 513–564.
Carroll, K. (2021), ‘Smart Cars are Getting Smarter: Legal Personhood for Self-Driving Vehicles’, Working Paper, Seton Hall University eRepository.
Casey, B. (2019), ‘Robot Ipsa Loquitur’, Georgetown Law Journal, 108(2): 225–286.
Chopra, S. and White, L. F. (2011), A Legal Theory for Autonomous Artificial Agents, Ann Arbor, MI: University of Michigan Press.
Cooling, J. E. and Herbers, P. V. (1983), ‘Considerations in Autopilot Litigation’, Journal of Air Law and Commerce, 48(4): 693–724.
De Chiara, A., Elizalde, I., Manna, E. and Segura-Moreiras, A. (2021), ‘Car Accidents in the Age of Robots’, International Review of Law and Economics, 68: 106022.
Duda, R. O. and Shortliffe, E. H. (1983), ‘Expert Systems Research’, Science, 220(4594): 261–268.
Eidenmüller, H. (2017a), ‘The Rise of Robots and the Law of Humans’, Oxford Legal Studies Research Paper No. 27/2017, https://dx.doi.org/10.2139/ssrn.2941001 (accessed 17 Nov 2021).
Eidenmüller, H. (2017b), ‘Robot's Legal Personality’, https://www.law.ox.ac.uk/business-law-blog/blog/2017/03/robots%E2%80%99-legal-personality, online (08 Mar 2017) (accessed 22 August 2021).
Eidenmüller, H. (2019), ‘Machine Performance and Human Failure: How Shall We Regulate Autonomous Machines’, Journal of Business & Technology Law, 15(1): 109–134.
Elish, M. C. and Hwang, T. (2015), ‘Praise the Machine! Punish the Human!’, Comparative Studies in Intelligent Systems, Working Paper.
Esteve, J. S. and Domènech, C. B. (2017), ‘Rights and Science in the Drone Era: Actual Challenges in the Civil Use of Drone Technology’, Rights and Science: R&S, 0(0): 117–133.
European Parliament (2017), ‘European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))’, Strasbourg: European Parliament.
Fox, J., North, J. and Dean, J. (2019), ‘AI in the Boardroom: Could Robots Soon be Running Companies?’, Governance Directions, 71(10): 559–564.
Giuffrida, I. (2019), ‘Liability for AI Decision-Making: Some Legal and Ethical Considerations’, Fordham Law Review, 88(2): 439–456.
Giuffrida, I., Lederer, F. and Vermeys, N. (2017), ‘A Legal Perspective on the Trials and Tribulations of AI: How Artificial Intelligence, the Internet of Things, Smart Contracts, and Other Technologies Will Affect the Law’, Case Western Reserve Law Review, 68(3): 747–782.
Guerra, A., Parisi, F. and Pi, D. (2021), ‘Liability for Robots II: An Economic Analysis’, Journal of Institutional Economics, published online, doi:10.1017/S1744137421000837.
Hubbard, F. P. (2014), ‘Sophisticated Robots: Balancing Liability, Regulation, and Innovation’, Florida Law Review, 66(5): 1803–1872.
Imai, T. (2019), ‘Legal Regulation of Autonomous Driving Technology: Current Conditions and Issues in Japan’, IATSS Research, 43(4): 263–267.
Jones, T. (2017), ‘International Commercial Drone Regulation and Drone Delivery Services’, Tech. Rep., RAND.
Jones, C. (2018), ‘The Robot Koseki: A Japanese Law Model for Regulating Autonomous Machines’, Journal of Business & Technology Law, 14(2): 403–468.
Kamensky, S. (2020), ‘Artificial Intelligence and Technology in Health Care: Overview and Possible Legal Implications’, DePaul Journal of Health Care Law, 21(3): 1–18.
Kassahun, Y., Yu, B., Tibebu, A. T., Stoyanov, D., Giannarou, S., Metzen, J. H. and Vander Poorten, E. (2016), ‘Surgical Robotics Beyond Enhanced Dexterity Instrumentation: A Survey of Machine Learning Techniques and Their Role in Intelligent and Autonomous Surgical Actions’, International Journal of Computer Assisted Radiology and Surgery, 11(4): 553–568.
Kop, M. (2019), ‘AI & Intellectual Property: Towards an Articulated Public Domain’, Texas Intellectual Property Law Journal, 28(3): 297–342.
Księżak, P. and Wojtczak, S. (2020), ‘AI versus Robot: In Search of a Domain for the New European Civil Law’, Law, Innovation and Technology, 12(2): 297–317.
Lanfranco, A. R., Castellanos, A. E., Desai, J. P. and Meyers, W. C. (2004), ‘Robotic Surgery: A Current Perspective’, Annals of Surgery, 239(1): 14–21.
Leis, M. J. (2006), Robots – Our Future Partners?! A Sociologist's View from a German and Japanese Perspective, Marburg, Germany: Tectum-Verlag.
Lemley, M. A. and Casey, B. (2019), ‘Remedies for Robots’, The University of Chicago Law Review, 86(5): 1311–1396.
Leveen, S. A. (1983), ‘Cockpit Controversy: The Social Context of Automation in Modern Airliners’, Ph.D. Dissertation, Cornell University, Department of Science and Technology Studies.
MacDorman, K. F., Vasudevan, S. K. and Ho, C.-C. (2009), ‘Does Japan Really Have Robot Mania? Comparing Attitudes by Implicit and Explicit Measures’, AI & Society, 23(4): 485–510.
Marchant, G. E. and Tournas, L. M. (2019), ‘AI Health Care Liability: From Research Trials to Court Trials’, Journal of Health & Life Sciences Law, 12(2): 23–41.
Matsuzaki, H. and Lindemann, G. (2016), ‘The Autonomy-Safety-Paradox of Service Robotics in Europe and Japan: A Comparative Analysis’, AI & Society, 31(4): 501–517.
Matthias, A. (2004), ‘The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata’, Ethics and Information Technology, 6(3): 175–183.
Miceli, T. J. (2017), ‘An Economic Model of Accidents: The Model of Precaution’, in T. J. Miceli (ed.), The Economic Approach to Law (3rd edn), Redwood City: Stanford University Press, pp. 18–37.
Michalski, R. (2018), ‘How to Sue a Robot’, Utah Law Review, 2018(5): 1021–1072.
Mingtsung, C. and Wei, Q. (2020), ‘Research on Infringement of Artificial Intelligence Medical Robot’, in Proceedings of the 2020 4th International Seminar on Education, Management and Social Sciences (ISEMSS 2020), Dordrecht, The Netherlands: Atlantis Press, pp. 497–500.
Möslein, F. (2018), ‘Robots in the Boardroom: Artificial Intelligence and Corporate Law’, in W. Barfield and U. Pagallo (eds), Research Handbook on the Law of Artificial Intelligence, Cheltenham, UK: Edward Elgar Publishing, pp. 649–670.
Mulligan, C. (2017), ‘Revenge Against Robots’, South Carolina Law Review, 69(3): 579–596.
National Highway Traffic Safety Administration (2013), ‘Preliminary Statement of Policy Concerning Automated Vehicles’.
National Highway Traffic Safety Administration (2016), ‘Federal Automated Vehicles Policy’, Official Policy.
Rachum-Twaig, O. (2020), ‘Whose Robot is it Anyway?: Liability for Artificial-Intelligence-Based Robots’, University of Illinois Law Review, 2020(4): 1141–1176.
Rahmatian, S. (1990), ‘Automation Design: Its Human Problems’, Systems Practice, 3(1): 67–80.
Šabanović, S. (2014), ‘Inventing Japan's “Robotics Culture”: The Repeated Assembly of Science, Technology, and Culture in Social Robotics’, Social Studies of Science, 44(3): 342–367.
Shavell, S. (1980), ‘Strict Liability versus Negligence’, The Journal of Legal Studies, 9(1): 1–25.
Shavell, S. (1987), Economic Analysis of Accident Law, Cambridge, MA: Harvard University Press.
Shavell, S. (2020), ‘On the Redesign of Accident Liability for the World of Autonomous Vehicles’, The Journal of Legal Studies, 49(2): 243–285.
Sheets, K. D. (2018), ‘The Japanese Impact on Global Drone Policy and Law: Why a Laggard United States and Other Nations Should Look to Japan in the Context of Drone Usage’, Indiana Journal of Global Legal Studies, 25(1): 513–538.
Smith, B. W. (2014), ‘Automated Vehicles are Probably Legal in the United States’, Texas A&M Law Review, 1(3): 411–522.
Sparrow, R. and Howard, M. (2017), ‘When Human Beings are Like Drunk Robots: Driverless Vehicles, Ethics, and the Future of Transport’, Transportation Research Part C: Emerging Technologies, 80: 206–215.
Stöcker, C., Bennett, R., Nex, F., Gerke, M. and Zevenbergen, J. (2017), ‘Review of the Current State of UAV Regulations’, Remote Sensing, 9(5): 459–485.
Talley, E. (2019), ‘Automatorts: How Should Accident Law Adapt to Autonomous Vehicles? Lessons from Law and Economics’, draft available at: https://www.hoover.org/sites/default/files/ip2-19002-paper.pdf.
World Economic Forum (2015), ‘Deep Shift: Technology Tipping Points and Societal Impact’.
Zolfagharifard, E. (2014), ‘Would You Take Orders from a Robot? An Artificial Intelligence Becomes the World's First Company Director’, Daily Mail.