
Part III - Responsible AI Liability Schemes

Published online by Cambridge University Press:  28 October 2022

Edited by
Silja Voeneky, Albert-Ludwigs-Universität Freiburg, Germany
Philipp Kellmeyer, Medical Center, Albert-Ludwigs-Universität Freiburg, Germany
Oliver Mueller, Albert-Ludwigs-Universität Freiburg, Germany
Wolfram Burgard, Technische Universität Nürnberg

The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives, pp. 185–226
Publisher: Cambridge University Press
Print publication year: 2022
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

12 Liability for Artificial Intelligence: The Need to Address Both Safety Risks and Fundamental Rights Risks

Christiane Wendehorst
I. Introduction

On 21 April 2021, the European Commission published its package of measures on a European approach to artificial intelligence (AI), consisting of a communication,Footnote 1 accompanied by an updated Coordinated Plan on AIFootnote 2 and a proposal for a horizontal regulation (Artificial Intelligence Act, AIA)Footnote 3 with nine annexes. This package is the first of three inter-related legal initiatives announced by the Commission with the aim of making Europe a safe and innovation-friendly environment for the development of AI. This first initiative aims to establish a European legal framework for AI to address fundamental rights and safety risks specific to AI systems. The second initiative is the revision of sectoral and more horizontal safety legislation. A proposal for a new Machinery RegulationFootnote 4 with eleven annexes was already published on the same day as the AI package, addressing an important aspect of AI usually referred to as ‘robotics’, and a proposal for a new General Product Safety RegulationFootnote 5 followed soon after. Parliament and Council are currently preparing both files for the trilogues. Finally, the third initiative announced is the introduction of EU rules to address liability issues related to new technologies, including AI systems. The Public Consultation for this initiative has already closed, and a proposal is planned for the third quarter of 2022.Footnote 6 This third initiative will comprise measures adapting the liability framework to the challenges of new technologies, including AI, to ensure that victims who suffer damage to their life, health, or property as a result of new technologies have access to the same compensation as victims of other technologies. In the Inception Impact Assessment, a revision of the Product Liability Directive (PLD)Footnote 7 and a legislative proposal with regard to liability for certain AI systems are identified as policy options.Footnote 8

Given that liability for AI and other emerging digital technologies had been on the agenda for some time, it may come as a surprise that liability legislation figures last among the three initiatives. An Expert Group on Liability and New Technologies was established in 2018. It was divided into two formations, one dealing specifically with the PLD and being largely dominated by stakeholders, the other – the so-called New Technologies Formation (EG-NTF) – having a broader mandate and consisting mainly of academics.Footnote 9 Only the EG-NTF ever published an official written report,Footnote 10 which then served, inter alia, as a basis for the European Commission’s report on the safety and liability implications of AI, the Internet of Things (IoT), and roboticsFootnote 11 of 19 February 2020, which formed part of the 2020 AI package and accompanied the Commission White Paper on AI.Footnote 12

A major driver of activities in the field of liability has certainly been the European Parliament. After its first resolution in 2017,Footnote 13 which included the much-quoted and much-criticised plea for electronic personhood,Footnote 14 the European Parliament passed another resolution on 20 October 2020 that includes a full-fledged ‘Proposal for a Regulation of the European Parliament and of the Council on liability for the operation of AI systems’.Footnote 15 This proposal is certainly much more mature than the 2017 resolution and bears a striking resemblance to policy considerations made within parts of the European Commission.

Whether the Commission will follow the recommendations of Parliament or take a different approach remains to be seen. Because AI liability is a subject matter that might be addressed within different regulatory and legal frameworks, for which different Directorates General of the Commission and different Committees within the Parliament are responsible, the matter remains highly controversial. This paper analyses the different risks posed by AI and explains why AI challenges existing liability regimes. It also presents and evaluates the main solutions put forward so far, concluding that different solutions may be appropriate for different types of risk.

II. Dimensions of AI and Corresponding Risks Posed

The challenges posed by AI and modern digital ecosystems in general – such as opacity (‘black box-effect’), complexity, and partially ‘autonomous’ and unpredictable behaviour – are similar, irrespective of where and how AI is deployed. However, at a somewhat lower level of abstraction, the potential risks associated with AI usually fall into one of two dimensions: ‘safety risks’ and ‘fundamental rights risks’.Footnote 16 These two types of risks are just the downside of our expectations of AI and of the promises made by those developing and deploying the technology, namely that AI will help by improving health, saving lives, and protecting the climate, and will assist us in making better decisions, enhancing fairness, and developing into a better society (Figure 12.1).

Figure 12.1 The ‘physical’ and the ‘social’ dimensions of risks associated with AI

1. Traditional (Physical) Safety Risks

Traditionally, death, personal injury, and damage to property have played a special role within safety and liability frameworks. These traditional types of risks can more specifically be described as ‘physical’ safety risks, but are normally referred to simply as ‘safety risks’. These risks continue to play their very special role in the digital era, but the concept must be understood more broadly to include not only death, personal injury, and damage to property in the traditional sense, but also damage to data and to the functioning of other digital systems. Where, for example, the malfunctioning of software causes the erasure of important customer data stored by the data holder in some cloud space, this should have the same legal effect as the destruction of a hard disk drive or of paper files with customer data (which is not to say that all data should automatically be treated in exactly the same way as tangible property in the tort liability context).Footnote 17 Likewise, where tax management software causes the victim’s customer management software to collapse, this must be considered a safety risk, irrespective of whether the customer management software was run on the victim’s hard disk drive or somewhere in the cloud within a SaaS scheme. While this is unfortunately still disputed under national tort law,Footnote 18 any attempt to draw a line between data stored on a physical medium owned by the victim and data stored otherwise seems to be completely outdated and fails to recognise the functional equivalence of different forms of storage.

2. Fundamental Rights Risks

‘Fundamental rights risks’ are associated with the social dimension of AI. They include discrimination, exploitation, manipulation, humiliation, oppression, and similar undesired effects that are – at least primarily – non-economic (non-material) in nature and that are not just the result of physical harm (as the latter would be dealt with under traditional regimes of compensation for pain and suffering, etc). Such risks have traditionally been dealt with primarily by special legal regimes, such as data protection law, anti-discrimination law or, more recently, law against hate speech on the Internet and similar legal regimes.Footnote 19 There is also a growing body of tort law that deals specifically with the infringement of personality rights.Footnote 20 Even though the concept of ‘fundamental rights’ is focused on individual rights, the term ‘fundamental rights risks’ should be understood more broadly as encompassing also risks of a more collective nature, for example, risks for the rule of law, democracy, and freedom of expression in general.Footnote 21

While the fundamental rights aspect and, therefore, the non-economic aspect of such risks is in the foreground, these risks can, of course, entail economic risks for the affected individual or for society as a whole. For instance, AI systems used for recruitment that favour male applicants create a social risk for female applicants by discriminating against them, but this also leads to adverse economic effects for the affected women.

3. Overlaps and In-Between Categories

The division between safety and fundamental rights risks is not always clear-cut and should not be overestimated. There are not only clear overlaps, but also a considerable grey area comprising a number of important risks. For instance, adverse psychological effects can be a very traditional safety risk,Footnote 22 where the effect is a diagnosed illness according to WHO criteria (such as depression), but also a fundamental rights risk that is associated with the social dimension of AI where the effect is not a diagnosed illness, but, for example, just stress or anxiety. It is not always easy to draw a line between the two.Footnote 23

a. Cybersecurity and Similar New Safety Risks

Digitalisation has given rise to a number of very special risks that are not easy to classify. They are essentially safety risks, albeit safety risks of a nature that is somewhat in a grey zone between ‘physical’ and ‘intangible’. Such special safety risks include the ‘data security’ aspect of data protection and privacy (i.e. prevention of data leaks), cybersecurity and harm to the network, and fraud or illegal collusion, to name but a few. They are recognised as relevant safety risks under selected pieces of safety legislation, in particular the Radio Equipment Directive (RED)Footnote 24 and the Medical Device Regulation (MDR).Footnote 25 Digital risks are also recognised in the Proposal for a Regulation on Machinery ProductsFootnote 26 and the Proposal for a Regulation on General Product Safety,Footnote 27 which are intended to replace the Directives currently in force. However, these (digital) risks will often primarily relate to the ‘physical’ dimension of safety, because data theft and manipulation or the breakdown of networks and other essential infrastructures will indirectly, at least in most cases, lead to damage to property in the broader sense or even threaten the health and life of persons.

b. Pure Economic Risks

Pure economic risksFootnote 28 are economic risks that are not just the result of the realisation of physical risks, such as personal injury or property damage. Where medical AI causes a surgery to fail, resulting in personal injury and consequently in hospitalisation, the costs of hospitalisation are an economic harm, but not a ‘pure’ economic harm because they result from the personal injury. Where, however, AI manipulates consumers and makes them buy overpriced products, the financial loss caused is not in any way connected with a safety risk and, therefore, qualifies as a pure economic risk (also referred to as ‘pure economic loss’). For pure economic risks to be considered legally relevant outside the realm of contractual liability, most legal systems require additional elements, such as fraud or other illegal behaviour or conduct that is considered socially unacceptable.Footnote 29 Pure economic risks, at least when legally relevant, might, therefore, be closer to fundamental rights risks.

III. AI As a Challenge to Existing Liability Regimes
1. Classification of Liability Regimes

While extra-contractual liability law has – beyond product liability law and a few specific areas – so far largely been a matter for the Member States, and while there exists a broad variety of different liability regimes at national level, it is still possible to group liability regimes according to their general characteristics.

a. Fault Liability

Fault liability has been the most important pillar of extra-contractual liability in a majority of European jurisdictions.Footnote 30 Liability always requires a sufficient justification for shifting loss from the person who originally suffered the damage (the victim) to a person who caused the damage (the tortfeasor). In the case of fault liability, the justification is the fault of the tortfeasor, which is usually either intent or negligence, with many different shades and gradations such as gross negligence or recklessness. If damage is caused by mere negligence, further conditions must usually be met, otherwise liability could potentially escalate indefinitely. Jurisdictions use different tools in order to keep liability within reasonable boundaries. Often, there is a requirement that the potential tortfeasor’s conduct was somehow objectionable, that is, that it either violated the law or public policy, or infringed rights and legally protected interests whose absolute integrity is so vital that any kind of infringement must, per se, be considered presumably unlawful. The latter is usually the case where human life, health, or bodily integrity are at stake or where the infringement concerns clearly defined property rights.Footnote 31

b. Non-Compliance Liability

Liability may also be triggered by the infringement of particular laws or particular standards whose purpose includes the prevention of harm of the type at hand. We find this type of liability regime both at EU level and at national level. An example of non-compliance liability at EU level is Article 82 of the General Data Protection Regulation (GDPR),Footnote 32 which attaches liability to any infringement of the requirements set out by the GDPR. Further, yet very different, examples can be found in EU non-discrimination legislation such as Council Directive 2004/113/EC.Footnote 33 Non-discrimination law obliges Member States to introduce into their national legal systems the legal measures necessary to ensure real and effective compensation for loss and damage sustained by a person injured as a result of discrimination, in a way which is dissuasive and proportionate to the damage suffered. In this context, Member States must ensure that, when a plaintiff establishes facts from which it may be presumed that there has been direct or indirect discrimination, it shall be for the respondent to prove that there has been no breach of anti-discrimination law.Footnote 34 Another example of non-compliance liability can be found in the financial sector. Where issuers of a financial instrument do not publicly disclose inside information concerning them, they become liable for any damage caused by the failure to do so.Footnote 35

At the national level, there may be both general clauses attaching liability to the infringement of protective statutory provisionsFootnote 36 and specific liability regimes attaching liability to non-compliance with very particular standards. Non-compliance liability is always of an accessory nature, in other words, there needs to be a basic regime setting out in some detail the duties and obligations to be met in order to be considered compliant. It should also be noted that, in a number of national jurisdictions, efforts are being made to impose non-compliance liability only in cases where the potential tortfeasor was at fault.Footnote 37

c. Defect and Mal-Performance Liability

A number of different liability regimes in jurisdictions in Europe may be described as types of ‘defect liability’ (or, in the case of services, ‘mal-performance liability’), although this is certainly not a common technical term. In the extra-contractual realm, the most important form of defect liability is product liability, which has been harmonised by the Product Liability Directive (PLD).Footnote 38 Product liability does not require fault on the part of the producer, but it still requires a particular shortcoming in the producer’s sphere, in that it requires that the product put into circulation was defective at the time when it left that sphere. The development risk defence (i.e. the defence relying on the fact that the defect, according to the state of the art in science and technology, could not have been detected when the product was put into circulation), which Member States were free to implement or not, moves product liability somewhat into the vicinity of fault liability.Footnote 39

Product liability is only the most conspicuous form of defect liability and the one where the term ‘defect’ is in fact used. However, when looking more closely at liability regimes in national jurisdictions, it becomes apparent that there is a panoply of different forms of liability that are all based on the unsafe or otherwise objectionable state of a particular object within the liable person’s sphere of control. Many of these forms of liability are somewhat at the borderline between fault liability and defect liability, as they are based on a presumption of fault, which the liable person is free to rebut under particular circumstances. Even some forms of vicarious liability under national law may be qualified, at a closer look, as forms of defect or mal-performance liability. For example, vicarious liability may be based on the generally ‘unfit’ nature of the relevant auxiliary in terms of personality or skills,Footnote 40 or on the fact that the human auxiliary failed to meet a particular objective standard of care.

d. Strict Liability

The term ‘strict liability’, although often used with a broader meaning, should be reserved for such forms of liability that do not require any kind of defect or mal-performance but are more or less based exclusively on causation. At a closer look, some further requirements beyond causation may have to be met, such as that the risk that ultimately materialised was within the range of risks covered by the relevant liability regime, and there may possibly be defences, such as a force majeure defence.Footnote 41

Strict liability is usually imposed only in situations where significant and/or frequent harm may occur despite the absence of any fault or any identifiable defect, mal-performance, or other non-compliance. It is also imposed where such elements would be so difficult for the victim to prove that requiring such proof would lead to massive under-compensation or inefficiency. Paradigm cases are the operation of aircraft, railways, ships, or motor vehicles, although solutions in the EU Member States differ, as does the attitude towards a ‘general clause’ of strict liability for unforeseen but parallel cases.Footnote 42 While there are also examples in national law where something close to strict liability is extended to all objects,Footnote 43 this is more or less exceptional and often narrowed down by case law.

2. Challenges Posed by AI

The mass rollout of AI and related technologies poses numerous challenges to existing liability regimes. Some of these challenges have their origin in interconnectedness, which is not strictly related to AI, but to digital ecosystems more generally. Other challenges are truly specific to AI.

a. Liability for the Materialisation of Safety Risks
(i) ‘Complexity’, ‘Openness’, and ‘Vulnerability’ of Digital Ecosystems

With enhanced connectivity and data flows in the Internet of Things (IoT), everything potentially affects the behaviour of everything, and it may become close to impossible for a victim to prove what exactly caused the damage (‘complexity’Footnote 44). For example, where a smart watering system for the garden floods the premises, this may be the effect of the watering system itself being unsafe, but there might also have been an issue with a humidity sensor bought separately, or with the weather data supplied by another provider.

‘Openness’Footnote 45 refers to the fact that components are not static but dynamic and are subject to frequent or even continuous change. Products change their safety-relevant features after they have been put into circulation, for example through the online provision of updates as well as through a variety of different data feeds and cloud-based digital services. This, in fact, means that a victim may not obtain compensation under liability regimes such as the PLD, which exclusively refer to the point in time when a product was first put into circulation.Footnote 46

Connectivity also gives rise to increased ‘vulnerability’,Footnote 47 due to cyber security risks and privacy risks as well as a number of related risks, such as risks of fraud. However, as has been demonstrated by the short survey of existing liability regimes, such risks are not necessarily covered by liability because of a general focus on risks of a ‘physical’ nature such as death, personal injury, or property damage.

(ii) ‘Autonomy’ and ‘Opacity’

AI adds further challenges to an already challenging picture through the features of ‘autonomy’ and ‘opacity’. The term ‘autonomy’, whose use with regard to machines has often been criticised because of its inextricable link with the free human will, refers to a certain lack of predictability as far as the reaction of the software to unseen instances is concerned. In particular where the software has been coded wholly or partially with the help of machine learning,Footnote 48 it is difficult to predict how it will react to each and every future situation.Footnote 49

While unpredicted behaviour in new situations nobody had ever thought about may also occur with software of a traditional kind, algorithms created with the help of machine learning cannot easily be analysed, especially not when sophisticated methods of deep learning have been used. This ‘opacity’ of the codeFootnote 50 (‘black box effect’) means that it is not easy to explain why an AI behaved in a particular manner in a given situation, and even less easy to trace that behaviour back to any feature which could be called a ‘defect’ of the code or to any shortcoming in the development process.

Both autonomy and opacity make it difficult to trace harm back to any kind of intent or negligence on the part of a human actor, which is why fault liability is not an ideal response to risks posed by AI. However, it is also clear that emerging digital technologies, notably AI, make it increasingly difficult to identify a defect due to the autonomy of software and software-driven devices as well as the opacity of the code, which means that defect liability may not be a wholly satisfactory response either.

(iii) Strict and Vicarious Liability as Possible Responses

As the ‘autonomy’ and ‘opacity’ of AI may give rise to exactly the kind of difficulties strict liability is designed to overcome,Footnote 51 the further extension of strict liability to AI applications is increasingly being discussed. This would, at the same time, solve some of the problems associated with ‘complexity’, ‘openness’, and ‘vulnerability’ that come with the IoT. For instance, where it is unclear whether the flooding of the premises was due to a defect of the watering system itself, a humidity sensor, or a data feed, it is still clear that the water itself came from the pipes. Thus, if the legislator introduced strict liability for smart watering systems, this would mean that whoever is the addressee of this strict liability (e.g. the operator or the producer of the watering system) would have to compensate victims for harm suffered from water spread by the system. There have been extensive discussions as to who is the right addressee of liability, and as to which types of risks should ultimately be covered.Footnote 52

Similar effects may be achieved by extending vicarious liability to situations where sophisticated machines are used in lieu of human auxiliaries. Otherwise, parties could escape liability by outsourcing a particular task to a machine rather than to a human auxiliary.Footnote 53

For some time, there has been a debate whether to recognise that highly sophisticated robots and software agents may themselves be addressees of liability. The idea of ‘electronic personhood’ was fuelled by a 2017 European Parliament resolution,Footnote 54 but the proposal has since met with a great deal of resistance.Footnote 55 Some of the resistance had its roots in ethical considerations,Footnote 56 but there are also practical flaws. As addressees of liability, AI systems would have to be equipped with funds or with equivalent insurance, which means that electronic personhood would be more an additional complication than a solution.Footnote 57 Another radical solution proposed is that of replacing liability schemes altogether by insurance or funds so that those suffering harm from AI would be compensated by a general compensation scheme to which, in particular, producers and maybe professional users would be contributing.Footnote 58 However, it is meanwhile broadly accepted that such schemes could realistically only be implemented for very particular applications and fields, such as connected driving, but not across the board for a general purpose technology such as AI.Footnote 59

b. Liability for the Materialisation of Fundamental Rights Risks

Where fundamental rights risks are concerned, the main problem is that existing liability schemes, due to their focus on safety risks, are largely inadequate to address the challenges posed by AI. Where fundamental rights risks posed by AI materialise, there is often no fault on the part of those deploying the AI, and it may be close to impossible for a victim to prove that there was fault on the part of the producer. Defect liability, at least as it currently exists under the PLD and under national legal regimes, is entirely focussed on traditional safety risks. This holds true to an even greater extent for strict liability, which, for the time being, is almost exclusively restricted to physical risks. Extending vicarious liability to situations where sophisticated machines are deployed in lieu of human auxiliariesFootnote 60 may help also with regard to fundamental rights risks, but only as long as there is a basis for liability of the hypothetical human auxiliary. Non-compliance liability might possibly be an option, but beyond non-discrimination law, the GDPR, and unfair commercial practices law there is currently not much of a general compliance regime that could serve as a ‘backbone’ for AI liability. Of course, this ‘backbone’ could theoretically be created by the emerging AI safety legislation. This is why it is essential to analyse this legislation.

IV. The Emerging Landscape of AI Safety Legislation

While the debate on challenges posed by AI to existing liability regimes is still ongoing, the landscape of AI-relevant product safety law is already changing rapidly, as illustrated by the proposals for a new Machinery Regulation and for the AIA. It is important to understand the emerging safety regimes, because it is only against their background that liability regimes specifically tailored to AI can be properly designed.

1. The Proposed Machinery Regulation
a. General Aims and Objectives

The proposed Machinery Regulation aims at modernising the existing machinery safety regime harmonised by the Machinery Directive,Footnote 61 in particular with regard to new technologies. This concerns potential risks that originate from direct human-robot collaboration, risks originating from connected machinery, the phenomenon that software updates affect the ‘behaviour’ of the machinery after its placing on the market, and the problems associated with risk assessment of machine learning applications before the product is placed on the market. Also, the current regime harmonised by the Machinery Directive still presupposes a driver or an operator responsible for the movement of a machine, but fails to set out requirements for autonomous machines. Needless to say, there were also developments to consider and inconsistencies to fix that were not directly related to software and AI. The current list of high-risk machines in Annex I to the Directive was drawn up 15 years ago and is urgently in need of an update.

b. Qualification As High-Risk Machinery

Within the product safety framework for machinery, the qualification of machinery products as high-risk machinery plays an important role. Amongst others, in Annex I, all software ensuring safety functions, including AI systems, and all machinery embedding AI systems ensuring safety functions have been added to the list of high-risk machinery.Footnote 62 The fact that all safety components that are software components, and all machinery embedding AI for the purpose of ensuring safety functions, are now included in the list of high-risk machinery automatically means, under the proposed Machinery Regulation, that for this kind of machinery only third party certification will be accepted, even when manufacturers apply the relevant harmonised standards.

A machinery product is included in the list of high-risk machinery products if it poses a particular risk to human health. The notion of ‘safety’ therefore seems to refer exclusively to risks of a physical nature. The risk posed by a certain machinery product is, according to Article 5(3) of the Proposal, established based on the combination of the probability of occurrence of harm and the severity of that harm. Factors to be considered in determining the probability and severity of harm include the degree to which each affected person would be impacted by the harm, the number of persons potentially affected, the degree of reversibility of the harm, and indications of harm that have been caused in the past by machinery products which have been used for relevant purposes. However, there are also factors that go more in the direction of ‘fundamental rights risks’, such as the degree to which potentially affected parties are dependent on the outcome produced by the machinery product, and the degree to which potentially affected parties are in a vulnerable position vis-à-vis the user of the machinery product.

c. Essential Health and Safety Requirements

The essential health and safety requirements that must be met for conformity of high-risk machinery are listed in Annex III. Where machinery uses AI for safety functions, the conformity assessment must consider hazards that may be generated during the lifecycle of the machinery as an intended evolution of its fully or partially evolving behaviour or logic.Footnote 63 As far as human-machine collaboration is concerned, a machinery product with fully or partially evolving behaviour or logic that is designed to operate with varying levels of autonomy must be adapted to respond to people adequately and appropriately, be it verbally through words or non-verbally through gestures, facial expressions, or body movement. It must also communicate its planned actions (what it is going to do and why) to operators in a comprehensible manner.Footnote 64

Largely, however, AI-specific aspects are left to the future AIA: where the machinery product integrates an AI system, the machinery risk assessment must consider the risk assessment for that AI system that has been carried out pursuant to the AIA.Footnote 65

2. The Proposed Artificial Intelligence Act
a. General Aims and Objectives

The AIA Proposal of 21 April 2021 aims at ensuring that AI systems placed on the Union market and used in the Union are safe and respect existing law on fundamental rights and Union values, and at enhancing governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems. At the same time, efforts are being made to ensure legal certainty in order to facilitate investment and innovation in AI and to facilitate the development of a single market for AI applications and prevent market fragmentation. The AIA is complementary to existing data protection law (in particular the GDPR and the Law Enforcement DirectiveFootnote 66), non-discrimination law, and consumer protection law.

As regards high-risk AI systems which are safety components of products, the AIA will be integrated into the existing and future product safety legislation. For high-risk AI systems related to products covered by the New Legislative Framework (NLF) legislation (e.g. machinery, medical devices, toys), the requirements for AI systems set out in the AIA will be checked as part of the existing conformity assessment procedures under the relevant NLF legislation.Footnote 67 The latter may, at the same time, include further AI-specific requirements relevant only in a particular sector. AI systems related to products covered by relevant ‘old approach’ legislation (e.g. aviation, motor vehicles)Footnote 68 are, however, not directly covered by the AIA.Footnote 69

b. The Risk-Based Approach

The AIA Proposal follows a risk-based approach, differentiating between uses of AI that create an unacceptable risk, a high risk, a limited risk, and a low or minimal risk.

(i) Prohibited AI Practices

Title II lists some narrowly defined AI systems whose use is considered unacceptable as contravening EU values and violating fundamental rights, such as manipulation through subliminal techniques or exploitation of group-specific vulnerabilities (e.g. children) in a manner that is likely to cause affected persons psychological or physical harm. The Proposal also prohibits general-purpose social scoring by public authorities and, subject to a range of exceptions, the use of ‘real time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes.Footnote 70

(ii) High-Risk AI Systems

Title III contains mandatory essential requirements for AI systems qualified as ‘high-risk’ AI systems, defined as systems that create a high risk to the health and safety or fundamental rights of natural persons. There are two main categories of high-risk AI systems: AI systems used as a safety component of products that are subject to third party ex ante conformity assessment under NLF legislation listed in Annex II; and other stand-alone AI systems explicitly listed in Annex III. The systems listed in Annex III, as it currently stands, more or less exclusively address fundamental rights risks. This includes biometric identification and categorisation of natural persons; education and vocational training; employment, workers management and access to self-employment; access to, and enjoyment of, essential private services, public services, and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes. The only exception is the ‘management and operation of critical infrastructure’Footnote 71 as the latter poses a systemic risk of a more physical nature rather than a fundamental rights risk.

The Commission may, from time to time, expand the list of high-risk AI systems used within certain pre-defined areas, by applying a set of criteria and risk assessment methodology. The risk assessment criteria listed in Article 7(2) are similar to those listed in the relevant Article of the proposed Machinery Regulation,Footnote 72 with two main exceptions: Reference is not only made to risks for the health of persons, but also to risks for the ‘health and safety or … fundamental rights’. Also, an additional criterion to consider is the extent to which existing Union legislation already provides for effective measures of redress in relation to the risks posed by an AI system (with the exclusion of claims for damages) and the existence of effective measures to prevent or substantially minimise those risks. For the purpose of future classification of additional AI systems as ‘high-risk’ systems, safety risks and fundamental rights risks are treated in the same manner and are not dealt with separately.

(iii) AI Systems Subject to Specific Transparency Obligations

Title IV is devoted to AI systems that are subject to enhanced transparency obligations. This concerns, for example, AI systems that may be mistaken for human actors, deep fakes, emotion recognition systems, and biometric categorisation systems.Footnote 73 It is important to note, though, that Titles III and IV are not mutually exclusive, i.e. an AI system that qualifies as a ‘high-risk’ system for the purpose of Title III may still fall under Title IV as well.

c. Legal Requirements and Conformity Assessment for High-Risk AI Systems

Legal requirements set out in Title III for high-risk AI systems address data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy, and security. By and large, and with regard to the AI system, the same requirements apply irrespective of whether what is at stake is the safety component of a toy robot or a connected household device falling under the RED, or an AI system intended to be used for the selection and evaluation of applicants in the course of a recruitment procedure. This may not be particularly convincing, because the safety requirements with regard to the toy robot or the connected household device are very different to the safety requirements with regard to the recruitment software. However, due to the general nature of the requirements and obligations listed in the Proposal, it may still be the better choice to deal with the two risk categories under identical provisions.

Obligations with regard to these requirements are largely placed on producers (called ‘providers’) of high-risk AI systems, but proportionate obligations are also placed on (professional) users and other participants across the AI value chain (such as importers, distributors, and authorised representatives) consistent with other modern product safety legislation. The Proposal sets out a framework for notified bodies to be involved as independent third parties in conformity assessment procedures. AI systems used as safety components of products regulated under the NLF, such as machinery or toys, are subject to the same compliance and enforcement mechanisms of the products of which they are a component, but in the course of applying these mechanisms the requirements imposed by the AIA must be ensured as well. New ex ante re-assessments of the conformity will be needed in case of substantial modifications to the AI systems.

As regards stand-alone high-risk AI systems, which are currently not covered by product safety legislation, a new compliance and enforcement mechanism is established along the lines of existing NLF legislation. However, with the exception of remote biometric identification systems, such high-risk AI systems are only subject to self-assessment of conformity by the providers. The justification provided in the explanatory notesFootnote 74 is that the combination with strong ex post enforcement would be an effective and reasonable solution, given the early phase of the regulatory intervention and the fact that the AI sector is very innovative and that expertise for auditing is only now being accumulated.Footnote 75

V. The Emerging Landscape of AI Liability Legislation

While Commission proposals on AI liability, which were initially planned for the first quarter of 2022, have meanwhile been postponed to the third quarter of 2022, a draft Regulation by the European Parliament has been on the table since October 2020.Footnote 76 It was prepared in parallel with the Commission’s White Paper on AI and the preparatory work for the AIA Proposal and has clearly been influenced by work at Commission level.

1. The European Parliament’s Proposal for a Regulation on AI Liability

The cornerstone of the EP Proposal for the regulation of AI liability is a strict liability regime for the operators of ‘high-risk’ AI systems enumeratively listed in an Annex, accompanied by an enhanced regime of fault liability for the operators of other AI systems.

a. Strict Operator Liability for High-Risk AI Systems

According to Article 4 of the EP Proposal, operators of AI systems shall be strictly liable for any harm or damage that was caused by a physical or virtual activity, device, or process driven by an AI system. The EP Proposal ultimately adopted the division into ‘frontend operator’ (i.e. the person deploying the AI system) and ‘backend operator’ (i.e. the person that continuously controls safety-relevant features of the AI system, such as by providing updates or cloud services) that had been developed by the author of this paper and included in the 2019 EG-NTF report.Footnote 77 According to the final version of the EP Proposal, not only the frontend operator, but also the backend operator may become strictly liable. However, the backend operator’s liability applies only insofar as it is not already covered by the PLD.Footnote 78 The only defence available to the operator is force majeure.Footnote 79 For the AI systems subject to strict liability, mandatory insurance is being proposed.Footnote 80

‘High-risk’ AI systems for the purpose of the proposed Regulation are to be exhaustively listed in an Annex. Interestingly, the final version of the Proposal was published with the Annex left blank. The Annex attached to the first published draft from April 2020 had met with heavy resistance due to its many inconsistencies, and it may have proved too difficult to agree on a better version. Also, it seemed opportune to wait for the list of ‘high-risk’ AI applications that would be attached to the AIA. In any case, given the rapid technological developments and the required technical expertise, the idea is that the Commission should review the Annex without undue delay, but at least every six months, and if necessary, amend it through a delegated act.Footnote 81

b. Enhanced Fault Liability for Other AI Systems

The EP Proposal includes not only a strict liability regime for ‘high-risk’ applications, but also a harmonised regime of rather strict fault liability for all other AI systems. Article 8 provides for fault-based liability for ‘any harm or damage that was caused by a physical or virtual activity, device or process driven by the AI-system’, and fault is presumed (i.e. it is for the operator to show that the harm or damage was caused without his or her fault).Footnote 82 In doing so, the operator may rely on either of the following grounds: The first ground is that the AI-system was activated without his or her knowledge while all reasonable and necessary measures to avoid such activation outside of the operator’s control were taken. The second ground is that due diligence was observed by performing all the following actions: selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities, and maintaining the operational reliability by regularly installing all available updates. It looks as if these two grounds are the only grounds by means of which operators can exonerate themselves, but Recital 18 also allows for a different interpretation, namely, that the two options listed in Article 8(2) should just facilitate exoneration by establishing ‘counter-presumptions’.

The proposed fault liability regime is problematic not only because of the lack of clarity in drafting, but also because Article 8(2)(b) might be unreasonably strict, as it seems that the operator must demonstrate due diligence in all aspects mentioned, even if it is clear that lack of an update cannot have caused the damage. More importantly, in the absence of any restriction to professional operators, even consumers would face this type of enhanced liability for any kind of AI device, from a smart lawnmower to a smart kitchen stove. This would mean burdening consumers with obligations to ensure that updates are properly installed, irrespective of their concrete digital skills, and possibly confronting them with liability risks they would hardly ever have had to bear under national legal systems.

c. Liability for Physical and Certain Immaterial Harm

Article 2(1) of the Proposal declares the proposed Regulation to apply where an AI system has caused ‘harm or damage to the life, health, physical integrity of a natural person, to the property of a natural or legal person or has caused significant immaterial harm resulting in a verifiable economic loss’. Article 3(i) provides for a corresponding definition of ‘harm or damage’. While life, health, physical integrity, and property were clearly to be expected in such a legislative framework, the inclusion of ‘significant immaterial harm resulting in a verifiable economic loss’ came as a surprise. If immaterial harm or the economic consequences resulting from it – such as loss of earnings due to stress and anxiety that do not qualify as a recognised illness – is compensated through a strict liability regime whose only threshold is causation,Footnote 83 the situations where compensation is due are potentially endless and difficult to cover by way of insurance.Footnote 84

This is so because there is no general duty not to cause significant immaterial harm of any kind to others, unless it is caused by way of non-compliant conduct (such as by infringing the law or by intentionally acting in a way that is incompatible with public policy). For instance, where AI used for recruitment procedures leads to a recommendation not to employ a particular candidate, and if that candidate, therefore, suffers economic loss by not receiving the job offer, full compensation under the EP Proposal for a Regulation would be due even if the recommendation was absolutely well-founded and if there was no discrimination or other objectionable element involved. While some passages of the report seem to choose somewhat more cautious formulations, calling upon the Commission to conduct further research,Footnote 85 Recital 16 explains very firmly that ‘significant immaterial harm’ should be understood as meaning harm as a result of which the affected person suffers considerable detriment, an objective and demonstrable impairment of his or her personal interests and an economic loss calculated having regard, for example, to annual average figures of past revenues and other relevant circumstances.

2. Can the EP Proposal be Linked to the AIA Proposal?

The 2020 White Paper on AI, the EP’s 2020 Proposal for an AI Liability Regulation, and the 2021 Commission Proposals for an AIA and for a new Machinery Regulation clearly have a number of parallels. They range from some identical terminology (e.g. ‘AI system’, ‘high-risk’) to the legislative technique of exhaustively listing ‘high-risk’ AI systems in an Annex, combined with the option for the European Commission to amend the Annex in a rather flexible procedure through delegated acts. So the question arises whether it would be possible to link an AI liability regime along the lines of the EP Proposal with the AIA Proposal in a way that the legal requirements and obligations perspective matches the liability perspective.

a. Can an AI Liability Regulation Refer to the AIA List of ‘High-Risk’ Systems?

The first question that arises is whether the list of ‘high-risk’ AI systems in the AI Liability Regulation can be identical to the list of ‘high-risk’ AI systems under the AIA. However, as tempting as it may be to simply refer to the AIA, this would lead to overreaching and inappropriate results. The justification for imposing strict liability, namely that the relevant product or activity leads to significant and/or frequent harm despite the absence of any fault or any identifiable defect, mal-performance, or non-compliance, does not coincide with the justification for imposing particular precautionary measures against unsafe products. While the AI systems for which strict liability is justified will most likely be a subset of the AI systems for which enhanced safety measures are justified, far from all AI systems of the latter type should be included in a strict liability regime, for example, when they are normally safe except when clearly defective. This is underlined by the fact that the relevant players are not identical. While safety requirements are primarily addressed at the level of producers (‘providers’ in the AIA terminology), the EP Proposal suggests imposing strict AI liability primarily on the frontend operators (‘users’ in the AIA terminology), but also on the backend operators (a concept missing in the AIA). So even if something along the lines of the EP Proposal became the law, it would be imperative to draft a liability-specific Annex defining ‘high-risk’ AI systems specifically for liability purposes. This could, for example, include big AI-driven cleaning or lawnmower robots used in public spaces, but not a small vacuum cleaner or toy robot.

b. Can the AIA Keep Liability for Immaterial Harm within Reasonable Boundaries?

As concerns fundamental rights risks, the current approach taken by the EP Proposal, which considers strict liability (alongside fault liability) for ‘significant immaterial harm that results in a verifiable economic loss’, has already been discarded earlier in this chapterFootnote 86 because of its failure to keep liability within any reasonable boundaries. However, the question arises whether the AIA Proposal can now assist in solving this problem.

One way of linking liability directly to the AIA Proposal would be to attach liability to engagement in any prohibited AI practice within the meaning of Title II of the AIA Proposal, which could lead to the compensation of both material and immaterial harm thereby caused. This would be a model of non-compliance liability and would fit easily into existing non-discrimination, data protection, and consumer protection legislation, all of which provide for liability for damages where harm has been caused by engagement in prohibited practices.

Another option would be to restrict liability for immaterial harm to cases of non-conformity with the legal requirements in Title III Chapter 2 of the AIA. For instance, where training, validation, or testing data for recruitment AI fail to be relevant, representative, free of errors, and complete, as required by Article 10(4) of the AIA Proposal, the provider could be liable if an applicant was falsely filtered out by the system despite being objectively better qualified. However, it soon transpires that the legal requirements included in Title III Chapter 2 of the AIA Proposal are not optimally suited as a basis for defect liability. This is because many of the requirements are not ends in themselves, whose breach would automatically mean that an AI system violates fundamental rights. Rather, some of them resemble due diligence standards that must be met during AI development, either as a quality-enhancing measure (e.g. data governance) or to facilitate monitoring (e.g. record-keeping). Non-conformity with such requirements could, therefore, justify a shift of the burden of proof, but should not in itself trigger liability. Thus, in the case of the recruitment AI system, non-conformity of training data with Article 10 should not lead to a final determination of liability but rather to the presumption that the resulting AI was defective.

VI. Possible Pillars of Future AI Liability Law

If the AIA Proposal as it currently stands is not optimally suited for functioning as a ‘backbone’ for AI liability, this does not mean that the AIA as such cannot fulfil this function. Upon a closer look, not much would have to be changed in the AIA to make it an appropriate basis for future legal regimes on AI liability. At the end of the day, liability for damages caused by AI systems may have to rest on different pillars, all of which would have to rely on, or at least be aligned with, provisions in the AIA and further product safety and other law.

1. Product Liability for AI

The first obvious link between the AIA (and other product safety law) on the one hand and liability law on the other could be established within product liability law, which relies on the PLD. Meanwhile, it is widely accepted that the PLD must in any case be adapted to the challenges of digital ecosystems at large.Footnote 87

a. Traditional Safety Risks

With regard to the reform of the PLD, the debate has so far been focused entirely on safety risks. Already with regard to these risks, the PLD as it currently stands is not fit to meet the challenges posed by digitalisation, not least in the light of uncertainties with regard to its scope (e.g. concerning self-standing software, including AI) and its focus on the point in time when a product is put into circulation, which fails to take into account updates, data feeds, and machine learning.Footnote 88 Where AI is involved, a victim may face particular difficulties showing that the AI system was defective. This is why no defect of the AI should have to be established by the victim for AI-specific harm caused by AI-driven products. Rather, it should be sufficient for the victim to prove that the harm was caused by an incident that might have something specifically to do with the AI (e.g. the cleaning robot making a sudden move in the direction of the victim) as contrasted with other incidents (e.g. the victim stumbling over the powered-off cleaning robot).Footnote 89

b. Product Liability for Products Falling Short of ‘Fundamental Rights Safety’?

As has been pointed out, the AIA Proposal also addresses fundamental rights risks. This raises the question whether product liability might also, in the future, include liability for products with a ‘fundamental rights defect’ or products falling short of ‘fundamental rights safety’.

The legal requirements described in Title III Chapter 2 of the AIA Proposal address some cloudy notion of ‘adverse impact on the fundamental rights’ of persons, including non-discrimination and gender equality, data protection and privacy, and the rights of the child. However, they fail to state – either in a positive or in a negative manner – what exactly the legal requirements are designed to achieve or to prevent. It is rather obvious that discrimination as far as prohibited by EU non-discrimination law, or data processing as far as prohibited by EU data protection law, is among the core effects to be prevented. However, given the much more ‘fuzzy’ nature of fundamental rights risks as compared with traditional safety risks, and given that there is a floating spectrum of beneficial or adverse impact on a broad variety of different fundamental rights, it is very difficult to impose liability for the materialisation of fundamental rights risks as such.

In order to achieve liability for the materialisation of fundamental rights risks as such, the first step must be to formulate an equivalent to the established concept of ‘safety’ in traditional product safety legislation. As far as traditional safety risks are concerned, it is possible for Article 6(1) of the PLD to simply state: ‘A product is defective when it does not provide the safety which a person is entitled to expect, taking all circumstances into account […]’, implicitly referring to the bulk of existing product safety law that is designed to protect ‘the safety and health of persons’ and similar traditional notions of safety. A corresponding concept of ‘fundamental rights safety’ could theoretically be derived from the AIA, in particular from the requirements for high-risk AI systems listed in Chapter 2 of Title III of the current proposal. However, in order to make these requirements operational for purposes of liability law, they would have to be divided into two groups. Requirements which constitute ‘AI-specific safety’ (which would, by and large, be the requirements listed in Articles 13 through 15 of the draft AIA) would have to be seen as clearly separated from the requirements that are about managing safety (mostly Article 9), increasing the likelihood of safety (selected aspects of which are listed in Article 10), or documenting safety (Articles 11 and 12).

Shortcomings in the technical documentation or in logging capabilities, for instance, should not be seen as a lack of ‘fundamental rights safety’ as such, but should rather trigger proof-related consequences in the liability context. Where technical documentation or logging capabilities are missing, or where the producer withholds logging data that would be available and potentially relevant, there could be a presumption that the missing information would have been to the detriment of the producer. Where, on the other hand, an AI system is not as accurate and robust as stated in its description or as could reasonably be expected from an AI system of the relevant kind, and therefore harm occurs (e.g. recruitment software assessing candidates has a strong gender bias and therefore female applicants are discriminated against), this lack of accuracy or robustness might trigger liability of the provider under an extended scheme of product liability. Designing such an extended scheme of product liability would, without doubt, remain challenging.

2. Strict Operator Liability for ‘High-Physical-Risk’ Devices

As far as death, personal injury, or property damage caused by a ‘high-risk’ product that includes AI for safety-relevant functions is concerned, strict liability seems to be a proper response. Again, the question arises whether the AIA can be made operational for the purposes of liability law.

a. Why AI Liability Law Needs to be More Selective than AI Safety Law

As has already been pointed out,Footnote 90 not every product that qualifies as a ‘high-risk’ product under the AIA fulfils the requirements that should be met for justifying strict liability (and the accompanying burden of insurance). For instance, a small robot vacuum cleaner may, under the future Machinery Regulation (if the current draft were enacted as is), be automatically classified as ‘high-risk’ and be subject to third party conformity assessment. It would, therefore, at least if the AI component fulfils a safety function, automatically be classified as a ‘high-risk’ AI system under the AIA as well. Similarly, a toy robot vehicle for children using AI for a safety function would be qualified as ‘high-risk’ under the AIA in cases where that toy is subject to third party conformity assessmentFootnote 91 (e.g. in any case where no harmonised standards exist that cover all safety requirements, or where the producer has deviated from the standard).Footnote 92

However, it would arguably be excessive to impose strict liability for harm caused by small toy robots or robot vacuum cleaners, in particular if that strict liability is imposed on operators. Those machines hardly ever cause significant physical harm by themselves, and if they do, it is usually because it was improper for the (frontend) operator to deploy them in the particular situation, such as where the operator of a retirement home uses an unsupervised cleaning robot in places and at times when elderly residents might stumble over it. Another possibility is that the machine is defective, for example, where the vacuum cleaner, which is normally used only during the night in areas that are locked for residents, suddenly breaks loose and starts hoovering while elderly residents are leaving the dining room. The problem is not so much that it would be inappropriate in the case of the retirement home to make its operator strictly liable for damage caused by the cleaning robot. Rather, the problem is that if all operators of small vacuum cleaner robots (including the millions of businesses that use them for cleaning their office space during the night, or even consumers) had to face strict liability and had to take out corresponding insurance, this would be extremely inefficient and benefit no one but the insurance industry.

b. Differentiating ‘High-Risk’ and ‘High-Physical-Risk-As-Such’

The AIA could, therefore, be made fully operational as a ‘backbone’ to AI liability law if its Article 6 with Annex II drew a distinction between AI systems that are – for whatever inner logic the relevant sectoral NLF product safety legislation may follow – subject to third party conformity assessment, and AI systems that create a high physical risk as such. Needless to say, the two groups would not be mutually exclusive, as AI systems that create a high physical risk as such will often be subject to third party conformity assessments under the relevant product safety law. On the other hand, it will often be AI systems governed by ‘old approach’ legislationFootnote 93 that pose a high physical risk to the safety of persons as such. This means that the AIA could provide a better basis for AI liability law if these two groups of AI systems could be separated and better differentiated, either by way of restructuring and slightly redrafting Article 6 and Annex II or by drawing that distinction in a separate legal instrument on AI liability.

c. Avoiding Inconsistencies with Regard to Human-Driven Devices

However, it should also be borne in mind that strict liability for physical risks caused by AI-driven devices might create significant inconsistencies if not accompanied by strict liability for the same type of devices where those devices are not AI-driven but steered by humans or by technology other than AI. A victim run over by a vehicle does not care that much whether the vehicle was AI-driven or not. So if strict liability is found to be appropriate for a particular type of device of a certain minimum weight running at a certain minimum speed in public spaces (or other spaces where they typically come into contact with persons involved with the operation), this will normally be the case irrespective of whether the device is human-driven or AI-driven. For instance, large cleaning machines, lawnmowers, or delivery vehicles in public spaces might generally have to be included in strict liability regimes even where, in the relevant jurisdiction, this is not yet the case. A strict liability regime should therefore, at the end of the day, not be restricted to AI systems.

3. Vicarious Operator Liability

Vicarious liability in the sense of liability for the acts and omissions of others, such as (human) auxiliaries, might be yet another pillar of future AI liability.

a. The ‘Accountability Gap’ that Exists in a Variety of Contexts

Part of the problem with existing liability regimes in Member States is associated with the absence, in most legal systems, of vicarious liability for the malfunctioning of machines. Where a human cleaner knocks over a person passing by, or where a human bank clerk miscalculates a customer’s credit score, there is usually fault liability of either the human auxiliary that was acting, or their employer, or both. Where, however, the person passing by is knocked over by a cleaning robot, or the credit score miscalculated by credit scoring AI, it is quite possible that no one is liable at all. The AI system itself cannot be liable, but its operator may not be liable either if that operator can demonstrate that they have bought the AI system from a recognised provider and complied with all monitoring and similar duties. The producer will often not be liable as a defect in the AI system is sometimes difficult to prove, and in any case product liability (unless it is significantly extended) only covers personal injury and property damage.

Vicarious liability would be a solution, but the rules on liability for acts or omissions of others differ vastly across the Member States and some courts insist that this kind of liability remains restricted to human auxiliaries.Footnote 94 Because the application of vicarious liability, whether directly or by analogy, is uncertain, an ‘accountability gap’ may exist, as very harmful activities could be conducted without anyone taking responsibility. This concerns contexts where fault liability would normally apply, contexts where there would be non-compliance liability, and possibly others.

b. Statutory or Contractual Duty on the Part of the Principal

Vicarious AI liability can only go as far as the operator of the AI would itself be liable, under national law, for violation of the same standard of conduct. This means that there must exist some statutory or contractual duty, in particular a duty of diligence, on the part of the operator. Such duties may exist in a variety of contexts, from professional care to recruitment to credit scoring to pricing, and vicarious liability may become relevant for a variety of legal frameworks, from traditional areas of tort law to non-discrimination law to data protection law to consumer and competition law.

Such duties could also follow from the AIA. It is, in particular, the engagement in prohibited AI practices that should lead to liability, irrespective of whether the operator was acting intentionally or negligently with regard to the fact that, for example, the AI was exploiting age-specific vulnerabilities. With an associated liability scheme in mind, it becomes even more apparent, though, that the very ‘pointillistic’ style of Title II of the AIA Proposal is a problem and that, if fundamental rights protection is taken seriously, it would have been necessary to have a more complete list of blacklisted AI practices and, ideally, a general clause to cover unforeseen cases.

c. A Harmonised Regime of Vicarious Liability

A new European scheme of vicarious liability might restrict itself to ensuring that a principal that employs AI for a sophisticated task faces the same liability under existing Member State law as a principal that employs a human auxiliary.Footnote 95 For example, a professional user of an AI system would be liable for harm caused by any lack of accuracy or other shortcomings in the operation of the system to the same extent as that user would be liable (under the applicable national law) for the acts or omissions of a human employee mandated with the same task as the AI system. Where a human would not have been able to fulfil the same task, such as where the task requires computing capabilities exceeding those of humans, the point of reference for determining the required level of performance would be available comparable technology which the user could be expected to use.Footnote 96

However, the EU legislator could also go one step further and introduce a fully harmonised concept of vicarious liability that does not suffer from the outset from the shortcomings we see in existing national concepts. By and large, this new European scheme of vicarious liability could provide that a business or public authority is liable for damage caused by its human auxiliaries acting within the scope of their functions, or by any AI employed by the business or public authority, where these auxiliaries or AI fail to perform – for whatever reason – at the standard that could reasonably be expected from them.Footnote 97 This comes close to strict liability insofar as it requires neither fault nor a defect (or general lack of reliability in the case of human auxiliaries), but some output that does not meet the standards of conduct to be expected from a business or public authority in the fulfilment of their functions. What this level of quality is depends on the task to be fulfilled. For instance, if it is about assessing the creditworthiness of a customer seeking credit, it would be the duty to provide proper assessment along the lines of any criteria prescribed by the law or stated by the business, and if it is about assessing candidates for a vacant position, it is again about assessing them properly, without any prohibited discrimination and duly taking into account the qualifications required for the position. Vicarious liability would, in any case, cover both safety risks and fundamental rights risks.

4. Non-Compliance and Fault Liability

Last but certainly not least, non-compliance and fault liability can also play an important role in the future landscape of liability for AI. In very much the same manner as Article 82 of the GDPR provides for liability of a controller or processor where that controller or processor violates their obligations under the GDPR, there could be liability under the AIA, or in a separate piece of legislation, where a provider, user or other economic operator covered by the AIA fails to comply with relevant AIA provisions, thereby causing relevant harm. This non-compliance liability might complement general fault liability that would continue to co-exist as a general baseline for extra-contractual liability. A breach of a duty of care that would constitute negligence could include deploying AI for a task it was not designed for, failing to provide for appropriate human oversight and other safeguards, or failing to provide for necessary long-term monitoring and maintenance. Non-compliance liability and fault liability could also be merged, such as by alleviating the burden of proof for the victim under fault liability, or even reversing that burden, where obligations under the AIA have not been complied with.

VII. Conclusions

The potential risks associated with AI normally fall into one of two dimensions: (a) ‘safety risks’ (i.e. death, personal injury, damage to property, etc.) caused by unsafe products and activities involving AI, and (b) ‘fundamental rights risks’ (i.e. discrimination, total surveillance, manipulation, exploitation, etc.), including risks for society at large, caused by inappropriate decisions made with the help of AI or otherwise inappropriate deployment of AI. While safety risks are also highly relevant in the AI context, fundamental rights risks are much more AI-specific.

Existing extra-contractual liability regimes can essentially be divided into four categories: fault liability, non-compliance liability, defect or mal-performance liability, and strict liability in the narrower sense. Vicarious liability can normally also be analysed as falling into one of these categories. Three out of the four categories of liability regimes are either restricted to, or heavily focused on, traditional safety risks such as death, personal injury, or property damage. It is only non-compliance liability, such as can be found in the GDPR or as an annex to EU non-discrimination law or consumer protection law, that frequently also addresses harm resulting from fundamental rights risks. Despite the fact that fundamental rights risks are more AI-specific, liability for such risks seems to be largely uncharted territory, and the debate around liability for AI has largely been restricted to safety risks.

At the level of AI safety law, fundamental rights risks are now being addressed by way of prohibiting certain AI practices and by imposing mandatory legal requirements for other ‘high-risk’ AI systems, such as requirements concerning data and data governance, transparency, and human oversight. While it is not impossible to use the emerging AI safety regime as a ‘backbone’ for the future AI liability regime, the AIA Proposal, as it currently stands, is not optimally suited to help address liability for fundamental rights risks.

The future AI liability law could rest on several different pillars, such as: (a) a revised regime of product liability, which might even include liability for lack of ‘fundamental rights safety’; (b) strict operator liability for death, personal injury, property damage, and possibly further safety risks caused by ‘high-physical-risk’ devices; (c) vicarious operator liability for mal-performance of functions carried out in the course of business activities or activities of a public authority; and (d) fault and/or non-compliance liability for the operator's own negligence and/or failure to comply with obligations following from, in particular, the AIA.

While it would be desirable to have an AI safety regime onto which an AI liability regime can dock, it becomes apparent that the AIA Proposal has, regrettably, not been drafted with liability law in mind. Further negotiations about the AIA Proposal and the preparatory work on a future AI liability regime, as well as on a potential revision of the PLD, should, for the sake of consistency of Union law and of legal certainty, be more closely aligned.

13 Forward to the Past A Critical Evaluation of the European Approach to Artificial Intelligence in Private International Law

Jan von Hein
I. Introduction

On 2 October 1997, the Member States of the European Union (EU) signed the Treaty of Amsterdam and endowed the European legislature with a competence in the field of private international law that is now found in Article 81(2)(c) of the Treaty on the Functioning of the European Union.Footnote 1 In the following two decades, the EU created an expanding body of private international law.Footnote 2 In particular, the Rome II Regulation on the law applicable to non-contractual obligations was enacted on 11 July 2007.Footnote 3 Only eleven months later, the Rome I Regulation on the law applicable to contractual obligations was adopted.Footnote 4 Although both Regulations are already rather comprehensive, gaps as well as inconsistencies remain.Footnote 5 In light of the rapid technological development since 2009, the issue as to whether there is a need for specific rules on the private international law of artificial intelligence (AI) has to be addressed.Footnote 6 After the European Parliament’s JURI Committee had presented a proposal for a civil liability regime for AI in April 2020,Footnote 7 the European Parliament adopted – with a large margin – a pertinent resolution with recommendations to the Commission on 20 October 2020.Footnote 8 This resolution is part of a larger regulatory package on issues of AI.Footnote 9 The draft regulation (DR) proposed in this resolution is noteworthy not only with regard to the rules on substantive law that it contains,Footnote 10 but also from a choice-of-law perspective because it introduces new, specific conflicts rules for AI-related aspects of civil liability.Footnote 11 In the following contribution, I analyse and evaluate the European Parliament’s proposal against the background of the already existing European regulatory framework on private international law, in particular the Rome I and II Regulations.

II. The Current European Framework
1. The Goals of PIL Harmonisation

The basic economic rationale underlying the Rome II Regulation is succinctly captured in its Recital 6, which reads as follows:

The proper functioning of the internal market creates a need, in order to improve the predictability of the outcome of litigation, certainty as to the law applicable and the free movement of judgments, for the conflict-of-law rules in the Member States to designate the same national law irrespective of the country of the court in which an action is brought.

This Recital epitomises the basic tenet of the methodology developed by Friedrich Carl von Savigny in the nineteenth century, in other words, the goal of international decisional harmony.Footnote 12 The Commission’s explanation for its Rome II draft of 2003 is even more explicit with regard to the deterrence of forum shopping: unless conflicts rules for non-contractual obligations become unified, ‘[t]he risk is that parties will opt for the courts of one Member State rather than another simply because the law applicable in the courts of this State would be more favourable to them.’Footnote 13 The explanation for the draft of 2003 also makes clear that a unification of tort conflicts rests on a sound economic rationale, the reduction of transaction costs borne by the parties. A European Regulation on tort conflicts ‘allows the parties to confine themselves to studying a single set of conflict rules, thus reducing the cost of litigation and boosting the foreseeability of solutions and certainty as to the law.’Footnote 14 This rationale is particularly important for tort conflicts, because, contrary to contract conflicts, a choice of the applicable law ex ante was traditionally not available in many jurisdictions.Footnote 15 Even if the parties enjoy that possibility, they will frequently not be able to exercise this right because they do not anticipate an accident to happen.Footnote 16 Accordingly, clear objective conflicts rules have significantly greater weight in tort than in contract cases.Footnote 17 This is an important factor facilitating the emergence of new technologies with cross-border implications, such as driverless cars.Footnote 18

Moreover, the force of a practical example that would emanate from a successful codification of European conflicts rules on AI must not be underestimated. Although the initial American reaction towards the Rome II Regulation was rather critical, denouncing the final text as a ‘missed opportunity’ to transplant US doctrines to Europe,Footnote 19 there is a palpable transatlantic interest in recent European developments and the lessons that these may hold for the United States.Footnote 20 A well-known American conflicts scholar even recommended the European codification of tort conflicts as a model for further US legislation.Footnote 21 While the ‘end of history’ for private international law (i.e. a full convergence of US and European conflict of laws in torts)Footnote 22 is still a long way off, successful EU legislation on the law applicable to liability issues of AI will certainly increase the prospects for creating harmonised conflicts rules in this area on a global level.

2. The Subject of Liability

Both the Rome I and II Regulations only address the liability of natural personsFootnote 23 and ‘companies and other bodies, corporate or unincorporated’.Footnote 24 Thus, the question arises as to whether an AI system could be classified as another ‘unincorporated body’ within the meaning of these provisions.Footnote 25 There is a parallel discussion about attributing legal personality to AI-systems in substantive private law.Footnote 26 Although the mere wording of the English version of the Rome I and II Regulations would arguably allow such an innovative interpretation, other linguistic versions suggest a narrower, more traditional reading of the Regulations (e.g. the German one, which speaks of ‘Gesellschaften, Vereine und juristische Personen’). Since the law applicable to legal personality is not yet determined by EU private international law, but remains subject to domestic choice-of-law rules within the boundaries of the freedom of establishment,Footnote 27 it would be unwise to burden the Rome I and II Regulations with a regulatory aspect that is, from the point of view of international contract and tort law, merely an incidental question. Thus, the law applicable to legal personality will have to be determined by other measures, e.g. by a regulation based on the draft presented by the European Group for Private International Law in 2016.Footnote 28

3. Non-Contractual Obligations: The Rome II Regulation
a. Scope

The Rome II Regulation determines the law applicable to non-contractual obligations, in particular torts. The notion of ‘non-contractual obligation’ must be interpreted as an autonomous concept.Footnote 29 It covers both strict and fault-based liability.Footnote 30 Generally speaking, all types of harm or damage are covered, such as physical damage to property, pure economic loss, and immaterial harm.Footnote 31 The Rome II Regulation is limited to civil and commercial matters;Footnote 32 notably, it does not cover the liability of the state for acts and omissions in the exercise of state authority.Footnote 33 Thus, the law applicable to a Member State’s liability for the use of AI for the purpose of international police surveillance or military operations, for example, is determined by domestic choice-of-law rules.Footnote 34 Moreover, the Rome II Regulation is not applicable to non-contractual obligations arising out of violations of privacy and rights relating to personality, including defamation.Footnote 35 Therefore, the law applicable to any kind of use of AI that violates a person’s right to privacy or causes damage to their reputation must still be determined by domestic choice-of-law rules, such as Articles 40–42 of the German EGBGB.Footnote 36 Finally, although the rules of the Rome II Regulation are of European origin, they shall be applied whether or not the law specified by them is the law of an EU Member State.Footnote 37 Thus, according to this principle of ‘universal application’, even if an AI system operated by a British company causes damage to a person in Switzerland, the court of an EU Member State will determine the law applicable to such a case pursuant to the Rome II Regulation.Footnote 38

b. The General Rule (Article 4 Rome II)

The basic rule for torts in general is found in Article 4(1) Rome II, which refers to the place of injury. Recital 15 Rome II acknowledges that ‘lex loci delicti is the basic solution for non-contractual obligations in virtually all the Member States’. Nevertheless, the diverging interpretations of this principle by various Member States’ legislatures and courts in complex cases (place of injury, place of acting, or even both under the so-called theory of ubiquity) had in the past led to considerable legal uncertainty.Footnote 39 The preference for the place of injury is justified because, generally speaking, it strikes ‘a fair balance’ between the interest of the person claimed to be liable to foresee the applicable law and the interests of the person sustaining the damage.Footnote 40 From an economic point of view, the place of injury will usually lead to a fair distribution of the costs for obtaining the relevant legal information: In most cases, the person claimed to be liable should be able to anticipate that his or her acts may cause harm in another country, whereas the victim should be able to rely on the legal standard of the environment to which he or she exposed his or her body or property.Footnote 41 While the tortfeasor is thus forced to internalise the costs for negative externalities arising in other countries,Footnote 42 the victim is given the opportunity to structure his or her insurance in accordance with the law to which he or she is presumably accustomed.Footnote 43 Since Article 4(1) Rome II is based on the idea of striking ‘a fair balance’ between the alleged tortfeasor and victim, this neutral provision must not be interpreted in a one-sided fashion that favours the plaintiff. The Rome II Regulation does not, as a general principle, embrace the plaintiff-friendly principle of ubiquity found in German or Italian private international law.Footnote 44

The Rome II Regulation contains a significant number of specific rules for special torts.Footnote 45 This considerably reduces the weight that has to be carried by the general rule, which applies only ‘unless otherwise provided for in this Regulation’.Footnote 46 The main cases of practical importance that are exclusively governed by the general rule rather than by specific rules are traffic accidents.Footnote 47 However, even in this regard, the scope of application of Article 4 Rome II is limited in practice. The full communitarisation of private international law is impeded by the fact that there already exist two supranational instruments dealing with important areas of tort conflicts, namely, the Hague Convention on the law applicable to Traffic Accidents (HCTA) and the Hague Convention on the law applicable to Products Liability (HCP).Footnote 48 Both conventions count several EU Member States among their parties.Footnote 49 Those Member States were (and are) unwilling to withdraw from the respective conventions.Footnote 50 Since the EU could arguably not terminate their membership without their consent, rules governing the collision between EU conflicts rules and the Hague conventions had to be devised.Footnote 51 The solution finally codified in the Rome II Regulation provides that the Regulation does not prejudice the application of existing conventions that contain conflicts rules for non-contractual obligations.Footnote 52 The Rome II Regulation takes precedence, however, over conventions concluded exclusively between two or more Member States insofar as such conventions concern matters governed by the Regulation.Footnote 53 Since both pertinent Hague conventions have a sizeable number of non-EU state parties, this exception is of little practical use.Footnote 54 Even if a traffic accident is only connected with, for example, France and Germany, French courts have to apply the HCTA, whereas a German court must determine the applicable law under the Rome II Regulation.Footnote 55 Thus, in two of the most important areas of tort conflicts, traffic accidents and product liability, European private international law remains fragmented and continues to offer ample possibilities of forum shopping.Footnote 56 This situation is exacerbated by the fact that the Rome II Regulation excludes the possibility of renvoi.Footnote 57 Thus, cases involving driverless cars, for example, may be subject to different laws in various Member States.Footnote 58

The lex loci damniFootnote 59 is displaced in cases where the person claimed to be liable and the person sustaining the damage both have their habitual residence in the same country at the time when the damage occurs.Footnote 60 This rule had been familiar to many European codifications already before Rome II was enacted.Footnote 61 Again, it is a legitimate expression of the basic economic rationale underlying the Regulation: ‘[I]n most cases the common residence rule guarantees lower litigation costs, more efficient court administration, and international harmony of decisions’.Footnote 62 Usually, parties who share a common habitual residence will litigate in the country where they live; moreover, their insurance coverage will, in most cases, be structured according to the standards prevailing in this country.Footnote 63

Article 4(1) and (2) Rome II are coupled with an escape clause that is meant to provide for a sufficient degree of judicial discretion in the individual case.Footnote 64 The final paragraph, which is an open-ended standard rather than a rule, combines a fairly general approach in its first sentence (manifestly closer connection) with a particular example of such a connection (relationship between the parties, for example, a contract) in its second sentence. As Recital 14 Rome II shows, the drafters of the Regulation were mindful of the tension between ‘the requirement of legal certainty’ on the one hand and the ‘need to do justice in individual cases’ on the other. The Recital explains that

this Regulation provides for a general rule but also for specific rules and, in certain provisions, for an ‘escape clause’ which allows a departure from these rules where it is clear from all the circumstances of the case that the tort/delict is manifestly more closely connected with another country. This set of rules thus creates a flexible framework of conflict-of-law rules. Equally, it enables the court seised to treat individual cases in an appropriate manner.

Finally, Article 14 Rome II provides for a modern and liberal approach to party autonomy for non-contractual obligations, allowing a choice of the applicable law both ex post and, provided certain conditions are met, ex ante.Footnote 65 The reasons for this liberal approach are spelled out in the first sentence of Recital 31: ‘To respect the principle of party autonomy and to enhance legal certainty, the parties should be allowed to make a choice as to the law applicable to a non-contractual obligation.’ Party autonomy enhances legal certainty in two ways.Footnote 66 First, the flexible approach of the Regulation, which is characterised by a rather generous array of escape clauses,Footnote 67 introduces a potential source of litigation that must be balanced by giving parties the possibility of quickly resolving any dispute on the applicable law.Footnote 68 Secondly, the substantive laws of the Member States are characterised by significant divergences as far as the proper boundaries between tort and contract law are concerned. This is particularly true for cases such as pre-contractual liability, liability for pure economic loss, and the protection of third persons who are not a party to an existing contract with the person claimed to be liable.Footnote 69 Thus, parties who want to avoid a protracted litigation on issues of classification are well advised to choose the law applicable not only to their contractual obligations, but also to their non-contractual obligations.Footnote 70

c. The Rule on Product Liability (Article 5 Rome II)

With regard to product liability, Article 5 Rome II strives to create a balance between an effective protection of the victim, who is often a consumer and typically regarded as the weaker party, on the one hand, and the producer’s interest in foreseeability of the applicable law, on the other.Footnote 71

Article 5(1) Rome II presupposes a damage ‘caused by a product’. The notion of ‘product’ must be interpreted autonomously;Footnote 72 the Commission’s Explanatory Memorandum of 2003Footnote 73 refers to the definition found in the EU Directive on Product Liability.Footnote 74 The substantive EU law on product liability so far only applies to physical goods.Footnote 75 Thus, strict liability for data processing cannot be based on the current Product Liability Directive.Footnote 76 A working group hosted by the European Law Institute has recently published a paper on giving the Product Liability Directive a digital ‘update’, but this reform process is still in its first stages.Footnote 77 Although the rules of the current Product Liability Directive may be extended to cover standard software delivered on a DVD, for example,Footnote 78 it is controversial whether software that was designed to meet the specific needs of the customer could be classified as a ‘product’.Footnote 79 Those delineations are generally transferred to Article 5(1) Rome II.Footnote 80 In cases of autonomous driving, however, the software will be sold as an integral part of a car. In cases where software is embedded in a physical good, both the Product Liability Directive and Article 5(1) Rome II apply.Footnote 81

The cascade of connections found in Article 5 Rome II is structured as follows: first, parties may choose the law applicable to product liability claims under the general provision on party autonomy.Footnote 82 Likewise, the Rome II Regulation provides for an accessory connection of product liability claims to a pre-existing relationship, such as a contract, between the parties.Footnote 83 Both steps constitute major improvements compared to the Hague Convention on the law applicable to product liability,Footnote 84 which failed to include such rules.

Secondly, if both parties have their habitual residence in the same country, the law of that state applies.Footnote 85

Thirdly, if none of the above applies, Article 5(1) Rome II basically refers to the law of the state where the product was marketed, provided that the place of marketing coincides with one of three other territorial factors (the victim’s habitual residence, the place where the product was acquired, the place of injury) and that the person claimed to be liable (usually the producer) could reasonably foresee the marketing of the product or a product of the same type in this country. Contrary to specific provisions on product liability, for example in ItalyFootnote 86 or Switzerland,Footnote 87 Article 5(1) Rome II is not an alternative connection, but ranks the connecting factors in a hierarchical order. On the first rung of this cascade, the law applicable is that of the victim’s habitual residence, provided that (1) it coincides with the place of marketing and (2) the producer does not succeed in proving that he could not foresee the marketing of this or a similar product in this country.Footnote 88 If one of those conditions (marketing, foreseeability) is not met, the law of the country in which the product was acquired applies, again subject to a coincidence with the place of marketing and the test of foreseeability.Footnote 89 If the applicable law cannot be determined at this stage, the law of the country in which the ‘damage [read: injury] occurred’ applies, provided that at least in this country the two additional requirements (marketing, foreseeability) are met.Footnote 90 If none of the three countries enumerated in Article 5(1) Rome II passes the test of foreseeability, the applicable law is that of the producer’s habitual residence.
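
Purely as an illustration of the hierarchy just described, the following sketch renders the cascade as a simple decision procedure in Python. It is a deliberately simplified reading of Articles 14, 4(2), and 5(1) Rome II as summarised above, not a statement of the law: the function name, the case dictionary, and its keys are hypothetical, the foreseeability defence is reduced to a simple membership test, and the escape clause of Article 5(2) Rome II is omitted.

def applicable_law_product_liability(case):
    """Return the country whose law applies under a simplified Art 5(1) Rome II cascade."""
    # Party choice (Art 14) and an accessory connection to a pre-existing
    # relationship take precedence over the objective connecting factors.
    if case.get("chosen_law"):
        return case["chosen_law"]
    if case.get("pre_existing_relationship_law"):
        return case["pre_existing_relationship_law"]

    # Common habitual residence of victim and producer.
    if case["victim_residence"] == case["producer_residence"]:
        return case["victim_residence"]

    def marketed_and_foreseeable(country):
        # Each rung applies only if the product (or one of the same type) was
        # marketed in that country and such marketing was foreseeable.
        return country in case["marketed_in"] and country in case["foreseeable_in"]

    # The cascade: (a) victim's habitual residence, (b) place of acquisition,
    # (c) place of injury; fallback is the producer's habitual residence.
    for country in (
        case["victim_residence"],
        case["place_of_acquisition"],
        case["place_of_injury"],
    ):
        if marketed_and_foreseeable(country):
            return country
    return case["producer_residence"]

example = {
    "chosen_law": None,
    "pre_existing_relationship_law": None,
    "victim_residence": "AT",
    "producer_residence": "DE",
    "place_of_acquisition": "AT",
    "place_of_injury": "CH",
    "marketed_in": {"AT", "DE"},
    "foreseeable_in": {"AT", "DE", "CH"},
}
# Prints "AT": the victim's habitual residence applies, because the product
# was marketed there and such marketing was foreseeable to the producer.
print(applicable_law_product_liability(example))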

This rather unwieldy ‘cascade system of connecting factors’Footnote 91 fails to achieve wholly convincing results. First, even though the Rome II Regulation has now been in force for more than a decade, it has not induced a single Member State that is a party to the HCP to denounce this convention. On the contrary, under Article 28 Rome II, the HCP takes precedence over the Rome II Regulation. The result is that, since 2009, Europeans have two different regimes on product liability conflicts which are both influenced by a similar methodology (grouping of contacts), but which do not yield uniform results in practice.

While Recital 20 explains that the ‘conflict-of-law rule in matters of product liability should meet the objectives of fairly spreading the risks inherent in a modern high-technology society, protecting consumers’ health, stimulating innovation, securing undistorted competition and facilitating trade,’ it must be kept in mind that Article 5(1) Rome II is not limited to business-to-consumer (B2C) cases, but applies to business-to-business (B2B) cases as well.

Since the connecting factor that enjoys primacy in the basic ruleFootnote 92 is relegated to the last rung of the ladder in cases of product liability,Footnote 93 drawing the line between general tortious liability and product liability is decisive in traffic accidents involving autonomous cars.Footnote 94 Thus, one may argue that there is a need for a special conflicts rule for those cases. A further complication arises from the above-mentioned fact that, in quite a number of Member States, the law applicable to traffic accidents or product liability is still not determined by the Rome II Regulation, but by the pertinent Hague Conventions of the early 1970s (see Sub-section II.3(b)). Therefore, even an amendment to the Rome II Regulation would not create European legal unity in this regard.

d. Special Rules in EU Law (Article 27 Rome II)

Pursuant to Article 27 Rome II, special EU conflicts rules take precedence over Rome II. In particular, the conflicts rules of the General Data Protection RegulationFootnote 95 may be relevant in cases involving AI.Footnote 96 In the course of the preparation of the Rome II Regulation, industry lobbies argued for codifying the ‘country of origin’-approach as a choice-of-law rule.Footnote 97 While those attempts failed, Article 27 Rome II explicitly states that ‘provisions of Community law which, in relation to particular matters, lay down conflict-of-law rules relating to non-contractual obligations’ take precedence over the Regulation. Moreover, Recital 35 Rome II adds that the Regulation:

should not prejudice the application of other instruments laying down provisions designed to contribute to the proper functioning of the internal market insofar as they cannot be applied in conjunction with the law designated by the rules of this Regulation. The application of provisions of the applicable law designated by the rules of this Regulation should not restrict the free movement of goods and services as regulated by Community instruments, such as … [the] Directive on electronic commerce[Footnote 98].

The precise reach of this exhortation is hard to define because the Directive on electronic commerce itself takes the somewhat schizophrenic position that it does not contain conflict-of-law rules,Footnote 99 while at the same time laying down the country-of-origin principle in its Article 3(1) and (2).Footnote 100 With regard to violations of rights of personality, a field not covered by Rome II, the CJEU tried to clarify matters as follows:Footnote 101

Article 3 of Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (“Directive on electronic commerce”), must be interpreted as not requiring transposition in the form of a specific conflict-of-laws rule. Nevertheless, in relation to the coordinated field, Member States must ensure that, subject to the derogations authorized in accordance with the conditions set out in Article 3(4) of Directive 2000/31, the provider of an electronic commerce service is not made subject to stricter requirements than those provided for by the substantive law applicable in the Member State in which that service provider is established.

If the European legislature were to codify special conflicts rules on AI, such a regulation would not only supersede the Rome II Regulation pursuant to its Article 27, but arguably also take precedence over the Hague Conventions. The respective Articles 15 of the HCTA and the HCP state that the Hague Conventions shall not prevail over other Conventions ‘in special fields’ to which the contracting states are or may become parties. Although an EU Regulation is surely not a ‘convention’ within the technical meaning of those provisions, one may argue that Article 15 HCTA/HCP should apply by way of an analogy to any EU Regulation dealing with the law applicable to autonomous driving, for example.

4. Contractual Obligations: The Rome I Regulation
a. Scope

Complementing Rome II, the Rome I Regulation determines the law applicable to contractual obligations.Footnote 102 Mirroring the Rome II Regulation,Footnote 103 the notion of contractual obligation must be interpreted as an autonomous concept.Footnote 104 Thus, the Rome I Regulation designates the law applicable to so-called smart contracts, for example.Footnote 105 Likewise, the Rome I Regulation is of universal application as well.Footnote 106

b. Choice of Law (Article 3 Rome I)

Party autonomy is largely permitted by Article 3 Rome I.Footnote 107 Consumers, however, must not be deprived of the protection accorded to them by the law of their habitual residence.Footnote 108

c. Objective Rules (Articles 4 to 8 Rome I)

Usually, the habitual residence of the service provider determines the law applicable to contracts for services.Footnote 109 With regard to consumers, the law of the consumer’s habitual residence applies under the conditions set out in Article 6(1) Rome I.Footnote 110

d. Special Rules in EU Law (Article 23 Rome I)

Special conflicts rules in other EU legal instruments prevail over the Rome I Regulation.Footnote 111 There are occasional conflicts rules in older consumer directives;Footnote 112 however, the more recent directive on digital content and services does not contain any such rule.Footnote 113 On the contrary, Recital 80 of said directive explicitly states that ‘[n]othing in this Directive should prejudice the application of the rules of private international law, in particular Regulations (EC) No 593/2008 and (EU) No 1215/2012 of the European Parliament and of the Council’.

III. The Draft Regulation of the European Parliament
1. Territorial Scope

With regard to substantive law, the draft regulation distinguishes between legally defined high-risk AI-systemsFootnote 114 and other AI-systems involving a lower riskFootnote 115. For high-risk AI-systems, the draft regulation would introduce an independent set of substantive rules providing for strict liability of the system’s operator.Footnote 116 Further provisions deal with the amount of compensation,Footnote 117 the extent of compensationFootnote 118 and the limitation period.Footnote 119 The spatial scope of those autonomous rules on strict liability for high-risk AI-systems is determined by Article 2 DR, which reads as follows:

  1. This Regulation applies on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to the life, health, physical integrity of a natural person, to the property of a natural or legal person or has caused significant immaterial harm resulting in a verifiable economic loss.

  2. Any agreement between an operator of an AI-system and a natural or legal person who suffers harm or damage because of the AI-system, which circumvents or limits the rights and obligations set out in this Regulation, concluded before or after the harm or damage occurred, shall be deemed null and void as regards the rights and obligations laid down in this Regulation.

  3. This Regulation is without prejudice to any additional liability claims resulting from contractual relationships, as well as from regulations on product liability, consumer protection, anti-discrimination, labour and environmental protection between the operator and the natural or legal person who suffered harm or damage because of the AI-system and that may be brought against the operator under Union or national law.

The unilateral conflicts rule found in Article 2(1) DR would prevail over the Rome II Regulation on the law applicable to non-contractual obligations pursuant to Article 27 Rome II.Footnote 120 However, the Rome II Regulation still applies to additional liability claims mentioned in Article 2(3) DR. Moreover, Article 2(1) DR seems to limit the applicability of the draft regulation to cases where the harm was suffered on the territory of the European Union.Footnote 121 This stands in stark contrast with the principle of universal application that is one of the cornerstones of the Rome II Regulation.Footnote 122 If a high-risk AI-system operated in Freiburg, Germany, for example, caused damage in Basel, Switzerland, the preconditions set out in Article 2(1) DR would not be met; thus, one would have to resort to the Rome II Regulation to determine the law applicable to the Swiss victim’s claims.

2. The Law Applicable to High Risk Systems

Furthermore, it must be noted that Article 2(1) DR deviates considerably from the choice-of-law framework of Rome II. While Article 2(1) DR reflects the lex loci damni approach enshrined as the general conflicts rule in the Rome II Regulation,Footnote 123 one must not overlook the fact that product liability is subject to a special conflicts rule, namely Article 5 Rome II, which is considerably friendlier to the victim of a tort than the general conflicts rule.Footnote 124 This cascade of connections is evidently influenced by the desire to protect the mobile consumer from being confronted with a law that may be purely accidental from his point of view. The lex loci damniFootnote 125 may have neither a relationship with the legal environment that consumers are accustomed toFootnote 126 nor with the place where they decided to expose themselves to the danger possibly emanating from the product.Footnote 127 The rule reflects the presumption that a defective product will affect most consumers in the country where they are habitually resident. In this respect, Article 2(1) DR is, in comparison with the Rome II Regulation, friendlier to the operator of a high-risk AI-system than to the consumer.

Even if one limits the comparison between Article 2(1) DR and the Rome II Regulation to the latter’s general rule,Footnote 128 it is striking that the DR does not adopt familiar approaches that allow for deviating from a strict adherence to lex loci damni. Contrary to Article 4(2) Rome II, where the person claimed to be liable and the person sustaining damage both have their habitual residence in the same country at the time when the damage occurs, Article 2 DR does not allow for the application of the law of that country. Moreover, an escape clause such as Article 4(3) or Article 5(2) Rome II is missing in Article 2 DR. Finally, yet importantly, Article 2(2) DR bars any party autonomy with regard to strict liability for a high-risk AI-system, which deviates strongly from the liberal approach found in Article 14 Rome II.

3. The Law Applicable to Other Systems

Apart from the operator’s strict liability for high-risk AI-systems, the draft regulation would introduce a fault-based liability rule for other AI-systems.Footnote 129 In principle, the spatial scope of the latter liability rule would also be determined by Article 2 DR as already described.Footnote 130 However, unlike the comprehensive set of rules on strict liability for high-risk systems, the draft regulation’s model of fault-based liability is not completely autonomous. Rather, the latter type of liability contains important carve-outs regarding the amounts and the extent of compensation as well as the statute of limitations. Pursuant to Article 9 DR, those issues are left to the domestic laws of the Member States. More precisely, Article 9 DR states: ‘Civil liability claims brought in accordance with Article 8(1) shall be subject, in relation to limitation periods as well as the amounts and the extent of compensation, to the laws of the Member State in which the harm or damage occurred.’ Thus, we find a lex loci damni approach with regard to fault-based liability as well. Again, the principle of universal applicationFootnote 131 is discarded; contrary to the rules of Rome II, Article 9 DR is a unilateral conflicts rule that only refers to ‘the laws of the Member State in which the harm or damage occurred’. Moreover, all the modern approaches codified in the Rome II Regulation – the cascade of connecting factors for product liability claims, the common habitual residence rule, the escape clause, and party autonomy – are strikingly absent from Article 9 DR as well.

Finally, yet importantly, Article 9 DR leads to a split between the law applicable to the basis of liability, on the one hand, and the law applicable to limitation periods as well as the extent of compensation, on the other. This dépeçage stands in stark contrast with the general scope that Article 15 Rome II assigns to the lex causae. Pursuant to Article 15(a) Rome II, the law applicable to a non-contractual obligation under the Rome II Regulation covers both the basis and the extent of liability.Footnote 132 In addition, Article 15(h) Rome II provides that the law designated by the Rome II Regulation also applies to rules of prescription and limitation.Footnote 133 As Axel Halfmeier explains, ‘the general tendency of the [Rome II] Regulation is to expand the reach of the lex causae and limit the role of the lex fori [because] the goal of the Rome Regulations is to produce harmony in results among the Member States’ courts’Footnote 134 – the classic Savignyan goal of international decisional harmony mentioned above.Footnote 135 Of course, one has to take into account that Article 9 DR does not refer to the lex fori, but to the lex loci damni. In this respect, the rule does not offer any incentive for forum shopping. However, the unitary approach underlying Article 15 Rome II also serves the goal of ‘avoiding the risk that the tort or delict is broken up into several elements, each subject to a different law’.Footnote 136 In this respect, Article 15 Rome II aims at preventing the ‘legal uncertainty’ associated with applying different laws to a single case.Footnote 137 Particularly with regard to Article 15(h) Rome II, the Court of Justice of the EU (CJEU) ‘pointed out that, in spite of the variety of national rules of prescription and limitation, Article 15(h) of the Rome II Regulation expressly makes such rules subject to the general rule on determining the law applicable’.Footnote 138 Creating a dépeçage between an autonomous rule on the conditions of liability, on the one hand, and the law applicable to the extent of damages and prescription issues, on the other, may lead to difficult questions of characterisation and adaptation. For example, the question may arise as to which particular rule of prescription of the lex loci damni shall apply if the latter law comprises various types of fault-based liability or calibrates the length of the prescription period depending on the degree of fault. In such a scenario, the court addressed would have to determine which domestic type of liability most closely corresponds to the model found in Article 8 DR – a task that may not be easy to fulfil. With regard to legal policy, it is hardly convincing to subject the issue of prescription to domestic laws because the periods codified in the Member States’ laws have been criticised as being too short in light of the complexities of international cases.Footnote 139

4. Personal Scope

The draft regulation, in principle, limits its personal scope to the liability of the operator alone.Footnote 140 Recital 9 of the resolution explains that the European Parliament

[c]onsiders that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third party like a hacker or for persons whose property is damaged by such a third party, as the interference regularly constitutes a fault-based action; notes that only for specific cases, including those where the third party is untraceable or impecunious, does the addition of liability rules to complement existing national tort law seem necessary.

Thus, for third parties, the conflicts rules of Rome II would continue to apply.

IV. Evaluation

At first glance, it seems rather strange that a regulation on a very modern technology – AI – should deploy a conflicts approach that has more in common with Joseph Beale’s First Restatement of the 1930sFootnote 141 than with the modern and differentiated set of conflicts rules codified by the EU itself at the beginning of the twenty-first century (i.e. the Rome II Regulation). While the European Parliament’s resolution, in its usual introductory part, diligently enumerates all EU regulations and directives dealing with substantive issues of liability, the Rome II Regulation is not mentioned once in the Recitals. One wonders whether the members of Parliament were aware of the European Union’s acquis in the field of private international law at all.

V. Summary and Outlook

In April 2020, the JURI Committee of the European Parliament presented a draft report with recommendations to the Commission on a civil liability regime for AI (see Sub-section I). The draft regulation proposed therein is noteworthy from a private international law perspective because it introduces new conflicts rules for AI. In this regard, the proposed regulation distinguishes between a rule delineating the spatial scope of its autonomous rules on strict liability for high-risk AI systems (Article 2 DR), on the one hand (see Sub-section III.2), and a rule on the law applicable to fault-based liability for low-risk systems (Article 9 DR), on the other hand (see Sub-section III.3.). The latter rule refers to the domestic laws of the Member State in which the harm or damage occurred. In this chapter, I have analysed and evaluated this proposal against the background of the already existing European regulatory framework on private international law, in particular the Rome II Regulation. In sum, compared with Rome II, the conflicts approach of the draft regulation would be a regrettable step backwards in many ways. On 21 April 2021, the European Commission presented its proposal for an ‘Artificial Intelligence Act’.Footnote 142 However, this proposal contains neither rules on civil liability nor provisions on the pertinent choice-of-law issues. Thus, it remains to be seen how the relationship between the European Parliament’s draft regulation and Rome II will be designed and fine-tuned in the further course of legislation.

Footnotes

12 Liability for Artificial Intelligence The Need to Address Both Safety Risks and Fundamental Rights Risks

1 European Commission, ‘Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Fostering a European Approach to Artificial Intelligence’ COM (2021) 205 final.

2 European Commission, ‘Annex to the Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions. New Coordinated Plan on AI 2021 Review’ COM (2021) 205 final.

3 European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ COM (2021) 206 final.

4 European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on Machinery Products’ COM (2021) 202 final.

5 European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on General Product Safety, Amending Regulation (EU) No 1025/2012 of the European Parliament and of the Council, and Repealing Council Directive 87/357/EEC and Directive 2001/95/EC of the European Parliament and of the Council’ COM (2021) 346 final.

6 European Commission, ‘Civil Liability: Adapting Liability Rules to the Digital Age and Artificial Intelligence’ https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12979-Civil-liability-adapting-liability-rules-to-the-digital-age-and-artificial-intelligence_en; this chapter was written in spring 2021, only certain sections have been updated.

7 Council Directive 85/374/EEC of 25 July 1985 on the Approximation of the Laws, Regulations and Administrative Provisions of the Member States Concerning Liability for Defective Products [1985] OJ L 210/29; see European Commission, ‘Commission Staff Working Document. Evaluation of Council Directive 85/374/EEC of 25 July 1985’ SWD (2018) 157 final.

8 European Commission, ‘Adapting Liability Rules to the Digital Age and Artificial Intelligence’ Inception Impact Assessment (Ares(2021)4266516).

9 European Commission, ‘Register of Commission Expert Groups, Expert Group on Liability and New Technologies (E03592)’ (European Commission, 9 March 2018) https://ec.europa.eu/transparency/expert-groups-register/screen/expert-groups/consult?do=groupDetail.groupDetail&groupID=3592&Lang=NL.

10 Directorate-General for Justice and Consumers, ‘Liability for Artificial Intelligence and Other Emerging Digital Technologies’ (European Commission, 27 November 2019) https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1/language-en/format-PDF (hereafter ‘NTF Expert Group’).

11 European Commission, ‘Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics’ COM (2020) 64 final.

12 European Commission, ‘White Paper on Artificial Intelligence: A European Approach to Excellence and Trust’ COM (2020) 65 final.

13 European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics, P8_TA (2017)0051 (hereafter EP Resolution on Civil Law Rules on Robotics).

14 G Wagner, ‘Robot Liability’ in S Lohsse, R Schulze, and D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (2019) 44 et seq; BA Koch, ‘Product Liability 2.0: Mere Update or New Version?’ in S Lohsse, R Schulze, and D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (2019) (hereafter Koch, ‘Product Liability 2.0: Mere Update or New Version?’); G Spindler, ‘Roboter, Automation, künstliche Intelligenz, selbst-steuernde Kfz – Braucht das Recht neue Haftungskategorien?’ (2015) CR 766, 773; H Eidenmüller, ‘The Rise of Robots and the Law of Humans’ (2017) ZEuP 765, 774 et seq; R Schaub, ‘Interaktion von Mensch und Maschine’ (2017) JZ 342, 345.

15 European Parliament Resolution of 20 October 2020 with Recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence (2020/2014(INL)) P9_TA(2020)0276 (hereafter EP Resolution on a Civil Liability Regime for AI).

16 In previous publications, I have referred to the two types as ‘physical’ and ‘social’ risks, see e.g. JP Schneider and C Wendehorst, ‘Response to the Public Consultation on the White Paper: On Artificial Intelligence: A European Approach to Excellence and Trust, COM(2020) 65 final’ (ELI 2020); C Wendehorst and Y Duller, Safety and Liability Related Aspects of Software (European Commission, 2021) (hereafter Wendehorst and Duller, ‘Safety and Liability’) 26 et seq; C Wendehorst, ‘Strict Liability for AI and Other Emerging Technologies’ (2020) JETL (hereafter Wendehorst, ‘Strict Liability’) 150, 161 et seq.

17 C Wendehorst, ‘Liability for Pure Data Loss’ in E Karner and others (eds), Festschrift für Helmut Koziol (2020) 225 (hereafter Wendehorst, ‘Liability for Pure Data Loss’).

18 See Wendehorst, ‘Liability for Pure Data Loss’ (Footnote n 17) 225; G Wagner, ‘§ 823’ in FJ Säcker and others (eds), Münchener Kommentar zum BGB (8th ed. 2020) para 245 et seq; L Specht, Konsequenzen der Ökonomisierung informationeller Selbstbestimmung (2012) 230; F Faust, ‘Digitale Wirtschaft: Analoges Recht: Braucht das BGB ein Update?’ in Ständige Deputation des Deutschen Juristentages (ed), Verhandlungen des 71. Deutschen Juristentages – Band I – Gutachten Teil A (2016), 48.

19 Regulation (EU) 2016/679, Article 82(1); Council Directive 2004/113/EC of 13 December 2004 implementing the principle of equal treatment between men and women in the access to and supply of goods and services [2004] OJ L 373/37, Article 8(2); German Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG, BGBl I S 3352); French Anti-Hate Speech Law (Loi Avia 2020/766); Austrian Anti-Hate Speech Law (Hass-im-Netz-Bekämpfungs-Gesetz, HiNBG, BGBl I 2020/148); Proposal for a Regulation of the European Parliament and the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC, COM (2020) 825 final.

20 For an overview see G Brüggemeier, AC Ciacchi, and P O’Callaghan, Personality Rights in European Tort Law (2010).

21 C Wendehorst, ‘The Proposal for an Artificial Intelligence Act COM(2021) 206 from a Consumer Policy Perspective’ (Federal Ministry Republic of Austria for Social Affairs, Health, Care and Consumer Protection, 2021) (hereafter Wendehorst, ‘The Proposal for an AIA from a Consumer Policy Perspective’), 110.

22 Article 10:202(1) of the Principles of European Tort Law (hereafter PETL) prepared by the European Group on Tort Law http://egtl.org/PETLEnglish.html.

23 C van Dam, European Tort Law (2006) (hereafter Van Dam, European Tort Law) 147.

24 Article 3(3) Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the Harmonisation of the Laws of the Member States Relating to the Making Available on the Market of Radio Equipment and Repealing Directive 1999/5/EC [2014] OJ L 153/62.

25 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, [2017] OJ L 117/1, Annex I, 14.2.

26 European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on Machinery Products’ COM (2021) 202 final, Annex III, 1.1.9. and 1.2.1.

27 European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on general product safety, amending Regulation (EU) No 1025/2012 of the European Parliament and of the Council, and repealing Council Directive 87/357/EEC and Directive 2001/95/EC of the European Parliament and of the Council’ COM (2021) 346 final, Article 7(1)(h).

28 PETL, Article 2:102(4); Van Dam, European Tort Law (Footnote n 23) 169.

29 G Brüggemeier, Tort Law in the European Union (2nd ed. 2018) para 385; B Wininger and others (eds), Digest of European Tort Law Volume 2: Essential Cases on Damage (2011) 383 et seq.

30 For a comparative report, see P Widmer (ed), Unification of Tort Law: Fault (2005).

31 PETL, Article 2:102.

32 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1.

33 Council Directive 2004/113/EC of 13 December 2004 implementing the principle of equal treatment between men and women in the access to and supply of goods and services [2004] OJ L 373/37.

34 Directive 2006/54/EC of the European Parliament and of the Council of 5 July 2006 on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation [2006] OJ L 204/23, Article 18; Council Directive 2004/113/EC, Article 9; Council Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation [2000] OJ L 303/16, Article 10.

35 Explicitly in sections 97 and 98 of the German Securities Trading Act.

36 See, for example, section 823(2) of the German Civil Code (Bürgerliches Gesetzbuch, BGB) and section 1311 of the Austrian Civil Code (Allgemeines Bürgerliches Gesetzbuch, ABGB).

37 J Fedtke and U Magnus in BA Koch and H Koziol (eds), Unification of Tort Law: Strict Liability (2002) 147.

38 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [1985] OJ L 210/29; see for the implementation of the Directive in the Member States WH van Boom and others, ‘Product Liability in Europe’ in H Koziol and others (eds), Product Liability Fundamental Questions in a Comparative Perspective (2017) 255 et seq.

39 NTF Expert Group (Footnote n 10) 27 et seq.

40 See e.g. section 1315 of the Austrian Civil Code (ABGB).

41 PETL, Article 7:102(1)(a) and Article 5:101(1); BA Koch and H Koziol, ‘Country Report Austria’ in BA Koch and H Koziol (eds), Unification of Tort Law: Strict Liability (2002) 12, 15, 19.

42 BA Koch and H Koziol, ‘Comparative Conclusions’ in BA Koch and H Koziol (eds), Unification of Tort Law: Strict Liability (2002) 395 et seq.

43 Responsabilité du fait des choses, Article 1242 Code civil.

44 NTF Expert Group (Footnote n 10) Key Finding no 1(a) 32 et seq.

45 NTF Expert Group (Footnote n 10) Key Finding no 1(c) 32 et seq.

46 Council Directive 85/374/EEC, Article 6(1)(c), Article 7(b); P Machnikowski, ‘Conclusions’ in P Machnikowski (ed), European Product Liability: An Analysis of the State of the Art in the Era of New Technologies (2016) 669, 695.

47 NTF Expert Group (Footnote n 10) Key Finding no 1(g) 32 et seq.

48 Article 3(a) of the EP Resolution on a Civil Liability Regime for AI (n 15) defines ‘AI-system’ as ‘a system that is either software-based or embedded in hardware devices, and that displays behaviour simulating intelligence by, inter alia, collecting and processing data, analysing and interpreting its environment, and by taking action, with some degree of autonomy, to achieve specific goals’.

49 NTF Expert Group (Footnote n 10) Key Finding nos 1(d) and (e) at 32, 33.

50 NTF Expert Group (Footnote n 10) Key Finding no 1(b) 32, 33.

51 See also NTF Expert Group (Footnote n 10) Key Finding no 9, 39 et seq.

52 Wendehorst and Duller, ‘Safety and Liability’ (Footnote n 16) 93 et seq; Wendehorst, ‘Strict Liability’ (Footnote n 16) 165 et seq.

53 NTF Expert Group (Footnote n 10) Key Findings nos 18 and 19, 45 et seq; H Zech, ‘Entscheidungen digitaler autonomer Systeme: Empfehlen sich Regelungen zu Verantwortung und Haftung?’ in Ständige Deputation des Deutschen Juristentages (ed), Verhandlungen des 73. Deutschen Juristentages – Band I – Gutachten Teil A (2020) (hereafter Zech, ‘Entscheidungen digitaler autonomer Systeme’) 76 et seq.

54 EP Resolution on Civil Law Rules on Robotics (n 13).

55 See e.g. the Open Letter to the European Commission Artificial Intelligence and Robotics (2018) www.robotics-openletter.eu/.

56 Data Ethics Commission, Opinion of the German Data Ethics Commission (BMJV, 2019) 219 www.bmjv.de/DE/Themen/FokusThemen/Datenethikkommission/Datenethikkommission_EN_node.html.

57 NTF Expert Group (Footnote n 10) Key Finding no 8, 36 et seq.

58 EP Resolution on Civil Law Rules on Robotics (n 13), paras 57, 59; G Borges, ‘New Liability Concepts: The Potential of Insurance and Compensation Funds’ in S Lohsse, R Schulze, and D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (2019) 148 et seq; Zech, ‘Entscheidungen digitaler autonomer Systeme’ (Footnote n 53) 105 et seq.

59 J Hanisch, ‘Zivilrechtliche Haftungskonzepte für Robotik’ in E Hilgendorf (ed), Robotik im Kontext von Recht und Moral (2014) 43; J Eichelberger, ‘Zivilrechtliche Haftung für KI und Smarte Robotik’ in M Ebers and others (eds), Künstliche Intelligenz und Robotik (2020) 198.

60 NTF Expert Group (Footnote n 10) Key Findings nos 18 and 19, 45 et seq.

61 Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC [2006] OJ L 157/24.

62 Annex I to COM (2021) 202 final, nos 24 and 25.

63 Annex III to COM (2021) 202 final, no 1(c).

64 Annex III to COM (2021) 202 final, no 1.3.7.

65 Annex III to COM (2021) 202 final, no 1(c).

66 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA [2016] OJ L 119/89.

67 Article 6(1)(b); Recital 63 COM (2021) 202 final.

68 Annex II section B to COM (2021) 206 final.

69 Article 2(2)(2) COM (2021) 206 final.

70 Wendehorst, ‘The Proposal for an AIA from a Consumer Policy Perspective’ (Footnote n 21) 75.

71 Annex III to COM (2021) 202 final, no 2.

72 European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on Machinery Products’ COM (2021) 202 final, Article 5(3).

73 Wendehorst, ‘The Proposal for an AIA from a Consumer Policy Perspective’ (Footnote n 21) 27; C Wendehorst and Y Duller, ‘Biometric Recognition and Behavioral Detection’ (European Parliament, 2021) 63; C Wendehorst and J Hirtenlehner, ‘Outlook on the Future Regulatory Requirements for AI in Europe’ (2022) 35.

74 COM (2021) 206 final, explanatory note no 64.

75 Critically, T Schmidt and S Voeneky, Chapter 8, in this volume.

76 EP Resolution on a Civil Liability Regime for AI (Footnote n 15).

77 NTF Expert Group (Footnote n 10) Key Findings nos 10 and 11.

78 See Article 3(e).

79 See Article 4(3).

80 Cf. EP Resolution on a Civil Liability Regime for AI (Footnote n 15) Article 4(4).

81 EP Resolution on a Civil Liability Regime for AI (Footnote n 15) Recommendation to the Commission no 16.

82 In fact, the drafting is not very clear with regard to this point. Recital 17 seems to underline that fault is always presumed and that the operators need to exonerate themselves. However, Recital 19 also refers to proof of fault by the victim.

83 EP Resolution on a Civil Liability Regime for AI (Footnote n 15) Article 4(1).

84 Cf. T Schmidt and S Voeneky, Chapter 8, in this volume, who suggest that companies that develop or produce high-risk AI should contribute to a fund that covers damages caused by AI-driven high-risk products or services.

85 EP Resolution on a Civil Liability Regime for AI (Footnote n 15) Recommendation to the Commission no 19.

86 See V 1(c).

87 Among the plethora of pleas made in this direction, see only C Twigg-Flesner in European Law Institute (ELI) (ed), Guiding Principles for Updating the Product Liability Directive for the Digital Age (2021) https://europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/ELI_Guiding_Principles_for_Updating_the_PLD_for_the_Digital_Age.pdf.

88 Wendehorst and Duller, ‘Safety and Liability’ (Footnote n 16) 68; Koch, ‘Product Liability 2.0: Mere Update or New Version?’ (Footnote n 14) 102.

89 Wendehorst and Duller, ‘Safety and Liability’ (Footnote n 16) 6, 93.

90 See sub V 2(a).

91 Article 19(3) of Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys [2009] OJ L 170/1, last amended by Commission Directive (EU) 2018/725 of 16 May 2018.

92 As set out in Article 10 and Annex II of Directive 2009/48/EC. Note that the requirements are so far focused on mechanical/physical properties (e.g. sharp edges and weight), flammability, chemicals, and heavy metals restrictions, so there will be only very few AI-driven toys qualifying as ‘high-risk’ under the AIA.

93 As listed in section B of Annex II and largely exempt from the AIA itself by Article 2(2) COM (2021) 206 final.

94 NTF Expert Group (Footnote n 10) 24 et seq.

95 Wendehorst and Duller, ‘Safety and Liability’ (Footnote n 16) 92.

96 NTF Expert Group (Footnote n 10) Key Findings nos 18 and 19.

97 This would amount to a combination between Article 6:102 (Liability for Auxiliaries) and Article 4:202 (Enterprise Liability) PETL.

13 Forward to the Past: A Critical Evaluation of the European Approach to Artificial Intelligence in Private International Law

1 Article 61(c) in conjunction with Article 65(b) of the Treaty of Amsterdam [1997] OJ C340/173 establishing the European Community; today Article 81(1) and (2)(c) of the Treaty on the Functioning of the European Union [2012] OJ C326/01; for an early assessment, see J Basedow, ‘The Communitarization of the Conflict of Laws under the Treaty of Amsterdam’ (2000) 37 CML Rev 687; on more recent developments, J von Hein, ‘EU Competence to Legislate in the Area of Private International Law and Law Reforms at the EU Level’ in P Beaumont and others (eds), Cross-Border Litigation in Europe (2017) 19.

2 See G Rühl and J von Hein, ‘Towards a European Code on Private International Law?’ (2015) 79 RabelsZ 701 et seq. (hereafter Rühl and von Hein, ‘Towards a European Code’).

3 Regulation (EC) 864/2007 of the European Parliament and of the Council of 11 July 2007 on the law applicable to non-contractual obligations (Rome II), [2007] OJ L 199/40; on the legislative history up to 2003, see J von Hein, ‘Die Kodifikation des europäischen Internationalen Deliktsrechts’ (2003) 102 ZVglRWiss 528, 529–533; up to 2007, J von Hein, ‘Die Kodifikation des europäischen IPR der außervertraglichen Schuldverhältnisse vor dem Abschluss?’ (2007) Versicherungsrecht 440; on the final compromise between the Council and the Parliament, see R Wagner, ‘Das Vermittlungsverfahren zur Rom II-VO’ in D Baetge, J von Hein, and M von Hinden (eds), Die richtige Ordnung, Festschrift für Jan Kropholler (2008) 715 (hereafter Wagner, ‘Das Vermittlungsverfahren’).

4 Regulation (EC) No 593/2008 of the European Parliament and of the Council of 17 June 2008 on the law applicable to contractual obligations (Rome I), 2008 OJ L 177/6.

5 Rühl and von Hein, ‘Towards a European Code’ (Footnote n 2) 713–715.

6 For a general survey, see L Wetenkamp, ‘IPR und Digitalisierung: Braucht das Internationale Privatrecht ein Update?’ (Beiträge zum Transnationalen Wirtschaftsrecht Volume 161, April 2019) https://telc.jura.uni-halle.de/sites/default/files/BeitraegeTWR/Heft161.pdf (hereafter Wetenkamp, ‘IPR und Digitalisierung’); on autonomous driving in particular, see T Kadner Graziano, ‘Cross-Border Traffic Accidents in the EU: The Potential Impact of Driverless Cars, Study for the JURI Committee’ (European Parliament, June 2016) www.europarl.europa.eu/RegData/etudes/STUD/2016/571362/IPOL_STU(2016)571362_EN.pdf (hereafter Kadner Graziano, ‘Driverless Cars’).

7 European Parliament, Draft Report 2020/2014(INL) (European Parliament, 27 April 2020) www.europarl.europa.eu/doceo/document/JURI-PR-650556_EN.pdf.

8 The text of this resolution is available at www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.pdf.

9 For an overview, see the Parliament’s press release ‘Parliament Leads the Way on First Set of EU Rules for Artificial Intelligence’ (European Parliament, 20 October 2020) www.europarl.europa.eu/news/en/press-room/20201016IPR89544/parliament-leads-the-way-on-first-set-of-eu-rules-for-artificial-intelligence; on subsequent developments, see the overview by A Pato, ‘The EU’s Upcoming Regulatory Framework on Artificial Intelligence and Its Impact on PIL’ (EAPIL Blog, 12 July 2021) https://eapil.org/2021/07/12/the-eus-upcoming-regulatory-framework-on-artificial-intelligence-and-its-impact-on-pil.

10 On those rules, see H Sousa Antunes, ‘Civil Liability Applicable to Artificial Intelligence: A Preliminary Critique of the European Parliament Resolution of 2020’ (SSRN, 8 January 2021) https://ssrn.com/abstract=3743242; G Wagner, ‘Haftung für Künstliche Intelligenz: Eine Gesetzesinitiative des Europäischen Parlaments’ (2021) 29 ZEuP 545.

11 On the general issues of AI and private international law, see Wetenkamp, ‘IPR und Digitalisierung’ (Footnote n 6); see also the conference report by S Arnold, T Eick, and C Hornung, ‘Conference Report: Conflict of Laws 4.0 (Münster, Germany)’ (Conflict of Laws, 14 January 2020) https://conflictoflaws.net/2020/conference-report-conflict-of-laws-4-0-munster-germany.

12 FC von Savigny, A Treatise on the Conflict of Laws (1880) 69 et seq.

13 Commission’s proposal for a regulation of the European Parliament and of the Council on the law applicable to non-contractual obligations (Rome II), COM(2003) 427 final, reprinted in J Ahern and W Binchy (eds), The Rome II Regulation on the Law Applicable to Non-Contractual Obligations (2009) 301, 303.

14 Footnote Ibid, 305.

15 See J von Hein, ‘Art 14 Rome II para 1’ in GP Calliess and M Renner (eds), Rome Regulations – Commentary (3rd ed. 2020) with further references.

16 G Hohloch, ‘Place of Injury, Habitual Residence, Closer Connection and Substantive Scope: The Basic Principles’ (2007) 9 YbPIL 1.

18 Cf. Kadner Graziano, ‘Driverless Cars’ (Footnote n 6) 57.

19 SC Symeonides, ‘Rome II and Tort Conflicts: A Missed Opportunity’ (2008) 56 Am J Comp L 173; but cf. the balanced evaluation by P Hay, ‘Contemporary Approaches to Non-Contractual Obligations in Private International Law (Conflict of Laws) and the European Community’s “Rome II” Regulation’ (2007) 7 EuLF I-137, I-151, who calls the Rome II Regulation a ‘major achievement’.

20 Cf. SC Symeonides, ‘The American Revolution and the European Evolution in Choice of Law: Reciprocal Lessons’ (2008) 82 Tul L Rev 1741.

21 PJ Kozyris, ‘Rome II: Tort Conflicts on the Right Track! A Postscript to Symeon Symeonides “Missed Opportunity”’ (2008) 56 Am J Comp L 471.

22 For an earlier assessment of the perspectives for a convergence of US and European approaches to tort conflicts, see J Kropholler and J von Hein, ‘From Approach to Rule-Orientation in American Tort Conflicts?’ in JAR Nafziger and SC Symeonides (eds), Law and Justice in a Multistate World: Essays in Honor of Arthur T von Mehren (2002) 317.

23 Rome I, Article 19 (1) 2nd sentence; Rome II, Article 23(2).

24 Rome I, Article 19(1) 1st sentence; Rome II, Article 23(1).

25 See Wetenkamp, ‘IPR und Digitalisierung’ (Footnote n 6) 16 et seq.

26 See, e.g., G Teubner, ‘Digitale Rechtssubjekte? Zum Privatrechtlichen Status Autonomer Softwareagenten’ (2018) 218 AcP 155; cf. also, from the perspective of public international law, T Burri, ‘International Law and Artificial Intelligence’ (2017) 60 German Yb Int’l L 91, 95–98.

27 TFEU, Articles 49 and 54; see J von Hein, ‘Corporations in European Private International Law: From Case-Law to Codification?’ (2015) 17 JYIL 90.

28 European Group for Private International Law, Draft Rules on the Law Applicable to Companies and Other Bodies, Milan (GEDIP, 16–18 September 2016) https://gedip-egpil.eu/wp-content/uploads/2016/09/Societe-TxtSousGroup-1.pdf; for closer analysis, see J von Hein, ‘Der Vorschlag der GEDIP für eine EU-Verordnung zum Internationalen Gesellschaftsrecht’ in B Hess, E Jayme, and HP Mansel (eds), Europa als Rechts- und Lebensraum, Liber Amicorum für Christian Kohler (2018) 551.

29 Rome II, Recital 11 2nd sentence; on the principle of autonomous interpretation of Rome II, see CJEU, Case C‑350/14 Florin Lazar v Allianz SpA (10 December 2015) para 21 (hereafter CJEU, Florin Lazar).

30 Rome II, Recital 11 3rd sentence.

31 Rome II, Article 2(2); CJEU, Florin Lazar (Footnote n 29) para 22.

32 Rome II, Article 1(1) 1st sentence.

33 Rome II, Article 1(1) 2nd sentence.

34 On international governmental liability for German military operations in Afghanistan, see BGHZ 212, 173 (Bundesgerichtshof III ZR 140/15).

35 Rome II, Article 1(2)(g).

36 Einführungsgesetz zum Bürgerlichen Gesetzbuch (EGBGB) – Introductory Act to the Civil Code of September 21, 1994, Federal Law Gazette 1994 I 2494, as amended by the Gesetz zum Internationalen Güterrecht und zur Änderung von Vorschriften des Internationalen Privatrechts, Federal Law Gazette 2018 I 2573, 2580; English translation by J Mörsdorf-Schulte available at www.gesetze-im-internet.de/englisch_bgbeg/.

37 Rome II, Article 3.

38 For further details, see A Halfmeier, ‘Article 2 Rome II paras 1–8’ in GP Calliess and M Renner (eds), Rome Regulations: Commentary (3rd ed. 2020).

39 See Rome II, Recital 15: ‘The principle of the lex loci delicti commissi is the basic solution for non-contractual obligations in virtually all the Member States, but the practical application of the principle where the component factors of the case are spread over several countries varies. This situation engenders uncertainty as to the law applicable’; cf. T Kadner Graziano, ‘General Principles of Private International Law of Tort in Europe’ in J Basedow, H Baum, and Y Nishitani (eds), Japanese and European Private International Law in Comparative Perspective (2008) 243, 247; A Nuyts, ‘La règle générale de conflit de lois en matière non contractuelle dans le Règlement Rome II’ (2008) Rev dr comm belge 489, 492.

40 Rome II, Recital 16: ‘Uniform rules should enhance the foreseeability of court decisions and ensure a reasonable balance between the interests of the person claimed to be liable and the person who has sustained damage. A connection with the country where the direct damage occurred (lex loci damni) strikes a fair balance between the interests of the person claimed to be liable and the person sustaining the damage, and also reflects the modern approach to civil liability and the development of systems of strict liability.’

41 J von Hein, Das Günstigkeitsprinzip im Internationalen Deliktsrecht (1999), 217–220; K Thorn, ‘Art 4 Rome II para 1’ in C Grüneberg, Bürgerliches Gesetzbuch (81st ed. 2022).

42 FG Alférez, ‘The Rome II Regulation: On the Way Towards a European Private International Law Code’ (2007) EuLF I-77, I-84; L de Lima Pinheiro, ‘Choice of Law on Non-Contractual Obligations between Communitarization and Globalization: A First Assessment of EC Regulation Rome II’ (2008) 44 RDIPP 5, 16.

43 Cf. J Basedow, ‘EC Conflict of Laws: A Matter of Coordination’ in L de Lima Pinheiro (ed), Seminário Internacional sobre a Comunitarizaçao do Direito Internacional Privado (2005) 26; A Junker, ‘Die Rom II-Verordnung: Neues Internationales Deliktsrecht auf europäischer Grundlage’ (2007) NJW 3675, 3678 (noting that the place of injury will frequently coincide with the victim’s habitual residence); T Petch, ‘The Rome II Regulation: An Update, Part I’ (2006) JIBLR 449, 454.

44 Article 40(1) of the German EGBGB (Footnote n 36); Article 62(1) of the Italian Code on Private International Law of May 31, 1995, Legge n. 218, Riforma del sistema italiano di diritto internazionale privato, Supplemento ordinario n 68 alla Gazzetta Ufficiale n 128, June 3, 1995, reprinted in (1997) 61 RabelsZ 344 (hereafter Italian PIL Code); cf. A Junker, ‘Kollisionsnorm und Sachrecht im IPR der unerlaubten Handlung’ in R Michaels and D Solomon (eds), Liber Amicorum Klaus Schurig (2012) 81, 82 et seq.

45 Rome II, Articles 5 to 9.

46 Rome II, Article 4(1).

47 On this group of cases, see J von Hein, ‘Article 4 and Traffic Accidents’ in J Ahern and W Binchy (eds), The Rome II Regulation on the Law Applicable to Non-Contractual Obligations (2009) 153; A Junker, ‘Das Internationale Privatrecht der Straßenverkehrsunfälle nach der Rom II-Verordnung’ (2008) JZ 169; T Kadner Graziano, ‘Internationale Verkehrsunfälle’ (2011) ZVR 40.

48 Hague Convention on the Law Applicable to Traffic Accidents of May 4, 1971, in Hague Conference on Private International Law (ed), Statute – Conventions – Protocol – Principles, The Hague 2020, No. 19; English text also available at www.hcch.net/index_en.php?act=conventions.text&cid=81; Hague Convention on the Law Applicable to Products Liability of October 2, 1973, in Hague Conference on Private International Law (ed), Statute – Conventions – Protocol – Principles, The Hague 2020, No. 22 and (1973) 37 RabelsZ 594.

49 HCTA: Austria, Belgium, Croatia, Czech Republic, France, Latvia, Lithuania, Luxembourg, The Netherlands, Poland, Slovakia, Slovenia, Spain; HCP: Croatia, Finland, France, Luxembourg, the Netherlands, Slovenia, Spain.

50 On the negotiations, see the detailed account by Wagner, ‘Das Vermittlungsverfahren’ (Footnote n 3) 726 et seq.

51 For a closer analysis of the problems under public international and EU law, see C Brière, ‘Réflexions sur les interactions entre la proposition de règlement “Rome II” et les conventions internationales’ (2005) 132 Clunet 677.

52 Rome II, Article 28(1); see G Garriga, ‘Relationships Between “Rome II” and Other International Instruments’, (2007) 9 YbPIL 137.

53 Rome II, Article 28(2).

54 HCTA: Belarus, Bosnia & Hercegovina, Macedonia, Montenegro, Morocco, Serbia, Switzerland, Ukraine; HCP: Macedonia, Montenegro, Norway, Serbia.

55 H Ofner, ‘Die Rom II-Verordnung – Neues Internationales Privatrecht für außervertragliche Schuldverhältnisse in der Europäischen Union’ (2008) ZfRV 13, 15 et seq.

56 A Staudinger, ‘Das Konkurrenzverhältnis zwischen dem Haager Straßenverkehrsübereinkommen und der Rom II-VO’ in D Baetge, J von Hein, and M von Hinden (eds), Die richtige Ordnung, Festschrift für Jan Kropholler (2008), 671; T Thiede and M Kellner, ‘“Forum shopping” zwischen dem Haager Übereinkommen über das auf Verkehrsunfälle anzuwendende Recht und der Rom-II-Verordnung’ (2007) Versicherungsrecht 1624.

57 Rome II, Article 24.

58 See in more detail Kadner Graziano, ‘Driverless cars’ (Footnote n 6) 37 et seq.

59 Rome II, Article 4(1).

60 Rome II, Article 4(2).

61 For example, EGBGB, Article 40(2) (Footnote n 36); Article 2(3) Wet Conflictenrecht Onrechtmatige Daad of April 11, 2001, (2001) Staatsblad van het Koninkrijk der Nederlanden, No 190, German translation in (2004) IPRax 157, now repealed and substituted by Article 159 Book 10 of the Dutch Civil Code, 19 May 2011, (2011) Staatsblad van het Koninkrijk der Nederlanden, No 272, English translation in (2011) 13 YbPIL 657, which mandates an analogous application of the Rome II Regulation to cases outside of its scope; Article 99(1) No 1 Loi du 16 juillet 2004 portant le Code de droit international privé (Belgian Law of July 16, 2004, holding the Code of Private International Law), (2004) Moniteur Belge 57344 (French/Dutch), official German translation in (2005) Belgisch Staatsblad 48274, English translation in (2006) 70 RabelsZ 358; some codifications take citizenship into account as well, for example, Article 62(2) Italian PIL Code (Footnote n 44); Article 45(3) of the Portuguese Civil Code (Código Civil Português), Decreto-Lei n 47 344 of November 25, 1966, in W Riering (ed), IPR-Gesetze in Europa (1997) 108.

62 T Dornis, ‘When in Rome, Do as the Romans Do? A Defense of the Lex Domicilii Communis in the Rome II-Regulation’ (2007) EuLF I-152, I-157; it is not convincing to argue that the parties could reach the same result by choosing the applicable law pursuant to Article 14 Rome II, see H Unberath, J Cziupka and S Pabst ‘Article 4 Rome II para 63’ in T Rauscher (ed), Europäisches Zivilprozess- und Kollisionsrecht (EuZPR/EuIPR), Kommentar, Volume 3: Rom I-VO, Rom II-VO (4th ed. 2016), because it will in many cases be impossible to reach a consensus on the applicable law after the accident has occurred; cf. G Rühl, ‘Article 4 Rome II para 85’ in B Gsell and others (eds), Beck-Online Großkommentar (1 December 2017).

63 Cf. A Junker, ‘Article 4 Rome II para 37’ in F J Säcker and others (eds), Münchener Kommentar zum Bürgerlichen Gesetzbuch, Volume 13: Internationales Privatrecht II (8th ed. 2021); T Kadner Graziano, ‘Le nouveau droit international privé communautaire en matière de responsabilité extracontractuelle’ (2008) 97 Rev crit dr int priv 445, 462; C von Bar and P Mankowski, Internationales Privatrecht Volume 2 (2nd ed. 2019) para 188.

64 Rome II, Article 4(3).

65 For a comprehensive monographic treatment, see A Vogeler, Die freie Rechtswahl im Kollisionsrecht der außervertraglichen Schuldverhältnisse (2013).

66 A Briggs, Agreements on Jurisdiction and Choice of Law (2008) para 10.72 (‘entirely rational, and a great step forward’); Editorial Comments, (2007) 44 CML Rev 1567, 1570 (‘[L]egal certainty for the parties is the winner’); E O’Hara O’Connor and L Ribstein, ‘Rules and Institutions in Developing a Law Market: Views from the United States and Europe’ (2008) 82 Tul L Rev 2147, 2167 et seq.; but cf. also TM de Boer, ‘Party Autonomy and Its Limitations in the Rome II Regulation’ (2007) 9 YbPIL 19, 22 (criticising Recital 31 as ‘not very convincing’) (hereafter de Boer, ‘Party Autonomy’).

67 In particular Rome II, Articles 4(3) and 5(2).

68 A functional complementarity ignored by de Boer, ‘Party Autonomy’ (Footnote n 66) 22.

69 C von Bar and U Drobnig, Study on Property Law and Non-Contractual Liability as They Relate to Contract Law (European Commission – Health and Consumer Protection Directorate-General, SANCO/2002/B5/010, 2004).

70 J von Hein, ‘Rechtswahlfreiheit im Internationalen Deliktsrecht’ (2000) 64 RabelsZ 595, 601; P Picht, ‘Article 14 Rome II para 18’ in T Rauscher (ed), Europäisches Zivilprozess- und Kollisionsrecht EuZPR/EuIPR Kommentar, Volume 3: Rom I-VO, Rom II-VO (4th ed. 2016).

71 Cf. Rome II, Recital 20.

72 A Junker ‘Article 5 Rome II para 13’ in F J Säcker and others (eds), Münchener Kommentar zum Bürgerlichen Gesetzbuch, Volume 13: Internationales Privatrecht II (8th ed. 2021); M Lehmann, ‘Article 5 Rome II para 24’ in R Hüßtege and HP Mansel (eds), Bürgerliches Gesetzbuch: Rom Verordnung, Nomos-Kommentar, Volume 6: EuErbVO, HUP (3rd ed. 2019) (hereafter Lehmann, ‘Article 5 Rome II para 24’).

73 Commission’s Proposal for a Rome II Regulation (Footnote n 13), COM(2003) 427 final, 14; concurring A Junker ‘Article 5 Rome II para 3’ in FJ Säcker and others (eds), Münchener Kommentar zum Bürgerlichen Gesetzbuch, Volume 13: Internationales Privatrecht II (8th ed. 2021); Lehmann, ‘Article 5 Rome II para 24’ (Footnote n 72); K Thorn, ‘Article 5 Rome II para 3’ in C Grüneberg (ed), Bürgerliches Gesetzbuch (81st ed. 2022); H Unberath, J Cziupka, and S Pabst, ‘Article 5 Rome II para 38’ in T Rauscher (ed), Europäisches Zivilprozess- und Kollisionsrecht (EuZPR/EuIPR), Kommentar, Volume 3: Rom I-VO, Rom II-VO (4th ed. 2016); O Remien, ‘Art. 5 Rome II para 4’ in HT Soergel, Bürgerliches Gesetzbuch mit Einführungsgesetz und Nebengesetzen BGB, Volume 27/1: Rom II-VO; Internationales Handelsrecht; Internationales Bank- und Kapitalmarktrecht (13th ed. 2019) (hereafter Remien, ‘Art. 5 Rome II para 4’).

74 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [1985] OJ L 210/29 (Product Liability Directive); as amended by Directive 1999/34/EC of the European Parliament and of the Council of 10 May 1999, [1999] OJ L 141/20.

75 Product Liability Directive, Article 2 1st sentence; see G Wagner, ‘§ 2 ProdHaftG para 15’ in FJ Säcker and others (eds), Münchener Kommentar zum Bürgerlichen Gesetzbuch, Volume 7: Schuldrecht – Besonderer Teil IV (8th ed. 2020) (hereafter Wagner, ‘§ 2 ProdHaftG para 15’); J Oechsler, ‘§ 2 ProdHaftG para 64’ in J von Staudinger (ed) Kommentar zum Bürgerlichen Gesetzbuch mit Einführungsgesetz und Nebengesetzen, Buch 2: Recht der Schuldverhältnisse, §§ 826–829; ProdHaftG (Unerlaubte Handlungen 2, Produkthaftung) (2014) (hereafter Oechsler, ‘§ 2 ProdHaftG para 64’); on product liability in the USA, cf. Restatement (Third) of Torts: Products Liability § 19 (1998), with further references in Comment d; for closer analysis, see WC Powers Jr., ‘Distinguishing Between Products and Services in Strict Liability’ (1984) 62 NCL Rev 415, 418, 425; MD Scott, ‘Tort Liability for Vendors of Insecure Software: Has the Time Finally Come?’ (2008) 67 Md L Rev 425; FE Zollers and others, ‘No More Soft Landings for Software: Liability for Defects in an Industry that Has Come of Age’ (2005) 21 Santa Clara Computer and High Tech LJ 745.

76 T Hoeren, ‘Review of “Nils Jansen, Die Struktur des Haftungsrechts”’ (2004) 121 SavZ/Germ 590, 593.

77 C Twigg-Flesner (ed), Guiding Principles for Updating the Product Liability Directive for the Digital Age (2021), available at https://europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/ELI_Guiding_Principles_for_Updating_the_PLD_for_the_Digital_Age.pdf.

78 Wagner, ‘§ 2 ProdHaftG para 15’ (Footnote n 75); Oechsler, ‘§ 2 ProdHaftG para 64’ (Footnote n 75).

79 See, for example, Oechsler, ‘§ 2 ProdHaftG para 69’ (Footnote n 75) (affirmative); and Wagner, ‘§ 2 ProdHaftG para 15’ (Footnote n 75) (negative), both with further references.

80 H Unberath, J Cziupka, and S Pabst, ‘Art. 5 Rome II para 40’ in T Rauscher (ed), Europäisches Zivilprozess- und Kollisionsrecht (EuZPR/EuIPR), Kommentar, Volume 3: Rom I-VO, Rom II-VO (4th ed. 2016); Remien, ‘Article 5 Rome II para 4’ (Footnote n 73).

81 G Wagner, ‘§ 2 ProdHaftG para 21’ in FJ Säcker and others (eds), Münchener Kommentar zum Bürgerlichen Gesetzbuch, Volume 7: Schuldrecht – Besonderer Teil IV (8th ed. 2020).

82 Rome II, Article 14.

83 Rome II, Article 5(2).

85 Rome II, Articles 4(2) and 5(1).

86 Article 63 of the Italian PIL Code (Footnote n 44).

87 Article 135 of the Swiss PIL Code of 18 December 1987, SR 291, available at www.fedlex.admin.ch/eli/cc/1988/1776_1776_1776/de.

88 Rome II, Article 5(1)(a).

89 Rome II, Article 5(1)(b).

90 Rome II, Article 5(1)(c).

91 Rome II, Recital 20.

92 Rome II, Article 4(1) place of damage.

93 Rome II, Article 5(1)(c).

94 See Kadner Graziano, ‘Driverless cars’ (Footnote n 6).

95 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1.

96 See JD Lüttringhaus, ‘Das Internationale Datenprivatrecht: Baustein des Wirtschaftskollisionsrechts des 21. Jahrhunderts – Das IPR der Haftung für Verstöße gegen die EU-Datenschutzgrundverordnung’ (2018) 117 ZVglRWiss 50.

97 On the controversy, see von Hein, ‘Abschluss’ (Footnote n 3) 441; for a comprehensive theoretical and comparative analysis, see R Michaels, ‘EU Law as Private International Law? Reconceptualising the Country-of-Origin Principle as Vested Rights Theory’ (2006) 2 J Priv Int’l L 195.

98 Directive 2000/31/EC of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce), [2000] OJ L 178/1.

99 E-Commerce Directive, Article 1(4).

100 Cf. H Heiss and LD Loacker, ‘Die Vergemeinschaftung des Kollisionsrechts der Außervertraglichen Schuldverhältnisse durch Rom II’ (2007) 129 Juristische Blätter 613, 617, who criticise the Directive as ‘wenig erhellend’ (‘little enlightening’).

101 CJEU, Joined Cases C-509/09 and C-161/10 eDate Advertising GmbH and Others v X and Société MGN Limited (25 October 2011).

102 Rome I, Article 1(1).

104 CJEU, Joined Cases C‑359/14 and C‑475/14 ‘ERGO Insurance’ SE, represented by ‘ERGO Insurance’ SE Lietuvos filialas, v ‘If P&C Insurance’ AS, represented by ‘IF P&C Insurance’ AS filialas (C‑359/14), and ‘Gjensidige Baltic’ AAS, represented by ‘Gjensidige Baltic’ AAS Lietuvos filialas, v ‘PZU Lietuva’ UAB DK (C‑475/14) (21 January 2016), para 43.

105 M Lehmann and F Krysa, ‘Blockchain, Smart Contracts und Token aus der Sicht des (Internationalen) Privatrechts’ [2019] Bonner Rechtsjournal 90; Wetenkamp, ‘IPR und Digitalisierung’ (Footnote n 6) 11; from a comparative point of view, see FA Schurr, ‘Anbahnung, Abschluss und Durchführung von Smart Contracts im Rechtsvergleich’ (2019) 118 ZVglRWiss 231.

106 Rome I, Article 2.

107 See M McParland, The Rome I Regulation on the Law Applicable to Contractual Obligations (2015) paras 9.01 et seq (herafter McParland, ‘The Rome I Regulation’).

108 Rome I, Article 6(2); McParland, ‘The Rome I Regulation’ (Footnote n 107) paras 12.182–12.190.

109 Rome I, Article 4(1)(b); Wetenkamp, ‘IPR und Digitalisierung’ (Footnote n 6) 20 et seq.

110 McParland, ‘The Rome I Regulation’ (Footnote n 107) paras 12.01 et seq.

111 Rome I, Article 23.

112 See the enumeration in Article 46b(3) of the German EGBGB (Footnote n 36).

113 Directive (EU) 2019/770 of the European Parliament and of the Council on certain aspects concerning contracts for the supply of digital content and digital services of 20 May 2019 [2019] OJ L 136/1.

114 DR, Article 4.

115 DR, Article 8.

116 DR, Article 4.

117 DR, Article 5.

118 DR, Article 6.

119 DR, Article 7.

121 Pato (Footnote n 9) criticises the wording of Art. 2(1) DR as unclear and tends to favour an application of the DR ‘where a court of a Member State is seized with a dispute involving damages caused by AI systems’.

123 Rome II, Article 4.

125 Rome II, Article 5(1)(c).

126 His habitual residence: Rome II, Article 5(1)(a).

127 Place of acquisition: Rome II, Article 5(1)(b).

128 Rome II, Article 4.

129 DR, Article 8.

131 Rome II, Article 3.

132 For a more detailed analysis, see I Bach, ‘Article 15 Rome II para 1 et seq’ in P Huber (ed), Rome II Regulation – Pocket Commentary (2011) (hereafter Bach, ‘Article 15 Rome II para 1 et seq’); A Halfmeier, ‘Article 15 Rome II paras 4–6’ in GP Calliess and M Renner (eds), Rome Regulations – Commentary (3rd ed. 2020); G Palao Moreno, ‘Article 15 Rome II paras 13–15’ in U Magnus and P Mankowski (eds), Rome II Regulation (European Commentaries on Private International Law) (2019).

133 For a closer analysis, see A Halfmeier, ‘Article 15 Rome II paras 23–26’ in GP Calliess and M Renner (eds), Rome Regulations: Commentary (3rd ed. 2020); G Palao Moreno, ‘Article 15 Rome II para 23’ in U Magnus and P Mankowski (eds), Rome II Regulation (European Commentaries on Private International Law) (2019).

134 A Halfmeier, ‘Article 15 Rome II para 2’ in GP Calliess and M Renner (eds), Rome Regulations: Commentary (3rd ed. 2020); see also G Palao Moreno, ‘Art. 15 Rome II para 2’ in U Magnus and P Mankowski (eds) Rome II Regulation (European Commentaries on Private International Law) (2019) (prevention of forum shopping).

136 CJEU, Case C‑350/14 Florin Lazar v Allianz SpA (10 December 2015) para 29.

137 Bach, ‘Article 15 Rome II para 1 et seq’ (Footnote n 132).

138 CJEU, Case C-149/18 Agostinho da Silva Martins v Dekra Claims Services Portugal SA (31 January 2019) para 33.

139 Kadner Graziano, ‘Driverless cars’ (Footnote n 6) 57.

140 As legally defined in DR, Article 3(d)–(f).

141 American Law Institute, Restatement of the Law: Conflict of Laws (1934).

142 European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final.

Figure 12.1 The ‘physical’ and the ‘social’ dimensions of risks associated with AI
