I. Introduction
Risk is a concept that has become central in a developing and extensive field of European Union (EU) legislation, namely the proposed EU regulation on artificial intelligence (AI). In the risk-based approach of the AI Act proposal,Footnote 1 a “pyramid of criticality” divides AI-related risks into four categories: minimal risk, limited risk, high risk and unacceptable risk. At the same time, risk is not a legal concept, and a number of questions arise regarding its meaning in a legal context.Footnote 2 Legal arguments have been presented on a general level when it comes to regulating risks,Footnote 3 perhaps most notably by SunsteinFootnote 4 and Jarvis Thomson,Footnote 5 but much remains to be said about the handling of risk in specific legal areas. Against this background, this article will focus on the risk discourse in the chosen area of tort law. The starting point is the risk-based approach of the AI Act proposal, but as this proposal does not include rules on liability, the analysis will go on to cover other legal instruments containing proposed rules on liability for AI.
First, some central risk elements within traditional tort law principles and assessments will be discussed, with examples primarily from Swedish tort law, followed by an overview of the risk-based approach of the proposed AI Act. This risk structure will then be analysed critically from a tort law perspective, after which the discussion continues to the European Parliament’s (EP) resolution on liability for AI,Footnote 6 with references to the proposed revision of the product liability directiveFootnote 7 and the proposed adaptation of civil liability rules to AI.Footnote 8 Within this liability theme, certain parallels and differences are identified between the suggested regime on civil liability for AI systems and existing EU regulation, namely liability for data protection breaches in the GDPR.Footnote 9 In a final section, some conclusive remarks on the challenges of current legal developments in the field of AI and tort law are put forward.
II. Risk in tort law: fault and negligence, strict liability and assumption of risk
The first substantive area of law that comes to mind in relation to the concept of risk is probably insurance law. In insurance law, risks are typically formulated within the specific clauses of the insurance contract. The insurance contract regulates the surrounding prerequisites for a binding contract concerning potential future loss, with certain clauses specifying the risks covered (eg fire, water damage, burglary) and sometimes closer descriptions of these risks (burglary is not covered when the thief does not have to force entry).Footnote 10 As insurance contracts must be as precise as possible when it comes to regulating risks – and the actual risk, which forms the basis for the cost of insurance, is calculated by statisticians – more general techniques or arguments relating to risk are foreign to the nature of insurance law. In the search for legal arguments relating to risk, we will therefore continue to a nearby area of law.
In light of the specific risk regulation in insurance law, tort law may appear to be further from risk than insurance law. However, tort law instead offers a different and more general basis for a legal analysis of the concept of risk. In tort law, the evaluation of risk is central to both strict and fault-based liability – a statement that will now be developed in relation to each category, starting with fault-based liability and continuing with strict liability.
Within fault-based liability, risk is an important part of the assessment of negligence. It is common to divide this assessment into two parts, where the first part encompasses a four-step circumstantial inventory. The risk of damage is the first step.Footnote 11 The next step is the extent of the potential damage, followed by alternative actions and, lastly, the possibility (for the potential tortfeasor) to realise the circumstances under the earlier steps. In the second part of the assessment, the four steps of part one are weighed together – with the parameters of risk plus potential damage on the one hand and alternatives plus insight on the other. Depending on the result of this balancing act, a conclusion will be reached in each specific case regarding the existence of negligence on the part of the potential tortfeasor.Footnote 12
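The two-part structure just described can be sketched schematically. The sketch below is purely illustrative: the numeric scores, the threshold and the weighing function are assumptions introduced for demonstration only, as the real negligence assessment is qualitative and case-specific.

```python
from dataclasses import dataclass


@dataclass
class CircumstantialInventory:
    """Part 1: the four-step inventory, each factor scored 0..1 (an assumed scale)."""
    risk_of_damage: float        # step 1: how likely was damage?
    extent_of_damage: float      # step 2: how severe could the damage be?
    alternative_actions: float   # step 3: how available were safer alternatives?
    insight: float               # step 4: could the tortfeasor realise steps 1-3?


def is_negligent(inv: CircumstantialInventory, threshold: float = 0.2) -> bool:
    """Part 2: weigh risk plus potential damage against alternatives plus insight."""
    harm_side = inv.risk_of_damage * inv.extent_of_damage
    precaution_side = inv.alternative_actions * inv.insight
    # Negligence is found when a significant threatened harm coincides with
    # available alternatives that the tortfeasor could have been aware of.
    return harm_side * precaution_side > threshold


# Hypothetical examples: a clearly careless act versus a minor mishap.
careless = CircumstantialInventory(0.8, 0.7, 0.9, 0.9)
mishap = CircumstantialInventory(0.1, 0.2, 0.5, 0.5)
```

The multiplicative weighing loosely resembles the Learned Hand formula (B < PL) known from US tort law; the Swedish assessment described above is not reducible to any formula, which is why every quantity in the sketch is an assumption.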
Within the negligence assessment, issues concerning risk are relevant primarily when the risk of damage is analysed. Examples of such risks are typically related to the surroundings of a certain action: when a person kicks a ball in a park, was there a risk of anyone outside the game being hit? When someone lets their cat stroll freely, is there a risk of the cat breaking into a neighbour’s home and soiling their carpet? When you are sawing a branch off a tree in your garden, is there a risk that it could fall and crush someone else’s bike? And so on.Footnote 13 The notable thing about these examples of risk issues is that they tend to be highly circumstantial; the assessment of minimal/limited/high/unacceptable risk will – despite certain objective standardsFootnote 14 – vary from case to case depending on every discernible fact. By comparison, an obvious challenge within the proposed AI regulation is that risk categories will be determined beforehand. Does such a system allow for a fair risk assessment, with consideration of individual factors? We will return to this question within the context of the suggested regulation.
Another area of tort law where risk evaluation is central is when strict liability is involved. Traditionally, strict liability has been imposed for “dangerous enterprises” such as industries and different forms of transport, but also for military shooting exercises and dog owners.Footnote 15 Where strict liability is applicable, fault is no longer a prerequisite for liability. Strict liability regulations define who is to be considered responsible when damage occurs – most often the owner of a company, dog, etc. If the risk brought about by, for example, a new technology materialises and damage thus is caused within an area where strict liability applies, the victim will not have to prove the occurrence of wrongdoing or the causal link between wrongdoing and the loss suffered.Footnote 16
Strict liability signals that great care is required when actions are taken in areas where the risks are significant, thereby demonstrating the preventative function of tort law.Footnote 17 This is where the arguments concerning risk come in. Imposing strict liability is a means of spreading risk and cost, placing responsibility on the actors with the most control and knowledge and resolving complex issues of causation.Footnote 18 There is an ongoing discussion in tort law regarding which activities warrant regulation with strict liability. For plaintiffs, strict liability naturally carries many benefits, enabling compensation in a variety of cases without demanding proof of negligence. However, there is also a societal interest in companies and individuals undertaking risky enterprises that society deems necessary. If the far-reaching form of strict liability is imposed too liberally, we risk being left without suppliers of these risky yet necessary functions. In this sense, the issue of strict liability, too, boils down to a balancing assessment of risks and costs. How risky is the activity in question? Risky enough to be subject to a regulation imposing strict liability? Can we afford to lose suppliers following such an intervention?Footnote 19
A third relevant issue in tort law is how assumption of risk impacts the damages assessment. If a person has agreed to undertake an activity that may lead to damage, such as taking part in a football game, the assumption of risk is considered to limit the prospects of damages – up to a certain level.Footnote 20 For example, in Swedish law it is possible to consent to assault of the lower degree but not of the standard degree.Footnote 21 The discussion of risk here focuses on what the assumption of risk has encompassed in the individual case and whether the surrounding actions and causes of damage can be said to have gone beyond the consent of the injured party. Once again, the assessment of how assumption of risk impacts damages depends on the category of harm and the detailed circumstances of the individual case, something that can be discussed in relation to the predetermined risk categories of the proposed AI regulation. Will assumption of risk even be possible when it comes to AI services in the EU, considering the proposed prohibitions and restrictions?
As can now be seen, several central areas of tort law – the negligence assessment, strict liability and assumption of risk – contain established risk assessments and may be useful when it comes to understanding the risk-based approach of regulating AI. In the following sections, these possible connections shall be examined in closer detail and certain challenges identified.
III. The risk-based approach of the proposed AI Act
The risk-based approach in the proposed EU regulation on AI is new at the EU level but has parallels in already existing legal instruments in the AI area.Footnote 22 These developments suggest that a risk-based approach may become the global norm for regulating AI.Footnote 23 As mentioned above, the proposed EU regulation differentiates between four levels of risk in a “pyramid of criticality”.Footnote 24 At the bottom tier is the vast majority of all existing AI systems.Footnote 25 They will be classified as minimal risk, thus falling outside the scope of the regulation. A large number of systems will have their place at the next level of the pyramid, “limited risk”, where the only obligations will be to supply certain information to users. A smaller number of systems will land at the next level up, “high risk”, where various restrictions apply. And at the top of the pyramid can be found the prohibited AI systems with “unacceptable risks”. Although the regulation has not yet been passed, there is a pressing need for the categorisation to be as clear as possible as soon as possible, so that businesses can predict whether their systems will be heavily regulated or not regulated at all and adapt their planning for the coming years. This leads us to the issue of how risk is to be defined in the regulation and how the proposed risk levels are differentiated. This will be investigated through a closer look at each step of the risk pyramid in turn, starting at the top.
The prohibited AI systems with unacceptable risks are those that contravene the Union’s values, such as by violating fundamental rights.Footnote 26 The prohibitions (placing on the market, putting into service or use of an AI system) include practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness (so-called “dark patterns”) or to exploit vulnerable groups such as children in a way that is likely to cause physical or psychological harm. Social scoring by public authorities for general purposes through AI is also prohibited, as is the use of “real-time” remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement (with certain limited exceptions). These prohibitions are explained in the proposed Article 5 of the regulation. What is more concretely at risk at the top level of the AI pyramid? To name a few of the Fundamental Rights, Freedoms and Equalities of the EU Charter: Human dignity (Article 1), Right to the integrity of the person (Article 3), Respect for private and family life (Article 7), Protection of personal data (Article 8), Non-discrimination (Article 21) and The rights of the child (Article 24). While AI systems may thus not put these central values at risk, it may be noted that exceptions to and limitations of the EU Charter of Fundamental Rights are generally possible – in line with the provisions of Article 52 of the Charter.
To draw a parallel with the imposition of strict liability for risky activities in tort law, the conclusion in the case of the proposed AI regulation is that the AI systems listed under “unacceptable risk” are considered riskier than, for example, the handling of electricity, railway transport and dynamite blasting (all commonly subject to strict liability). This comparison also illustrates that the world has changed from one in which physical risks were at the centre of attention to today’s increasingly abstract risks, where intangible values such as dignity and privacy are the targets of protection. This development is in itself challenging for both legislators and citizens to understand and regulate. Another interesting fact is that these prohibited practices are listed in the proposed regulation. For comparison within the EU, the starting point of the GDPR is to list the conditions under which personal data may be processed (not the opposite).Footnote 27 A question that arises is whether the technique of listing prohibited systems creates a risk of gaps in the regulation that might enable new or unspecified AI practices to operate, despite being potentially harmful.
At the next level of the pyramid, high-risk systems are regulated. The high-risk systems are the main focus of the proposed AI Act and are subject to the largest number of articles (Articles 6–51). High-risk systems “create a high risk to the health and safety or fundamental rights of natural persons”.Footnote 28 The difference compared with the prohibited practices at the top of the pyramid is thus that the high-risk AI systems do not in themselves contravene Union values but “only” threaten them. In the balancing act performed here by the EU legislator, the risk that high-risk systems pose to the rights to human dignity, privacy, data protection and other values is weighed against the benefits of the systems. As with the construction of strict liability in tort law, the benefits justify permitting the systems – but with restrictions. In order to be allowed on the European market, high-risk systems will be subject to an ex-ante conformity assessment, certain mandatory safety measures, market supervision and follow-up conformity assessments. These restrictions make up the “lifecycle” of high-risk systems.Footnote 29 A product safety marking known from EU product safety law – the CEFootnote 30 marking – will show that high-risk AI systems conform with the requirements of the regulation (Article 49) and are approved by the competent public authority. On a larger scale, a database of approved high-risk AI systems will be created at the EU level (Article 60).
How is a system classified as high risk? According to the explanatory memorandum of the proposed act, this will depend on the purpose of the system in connection with existing product safety legislation.Footnote 31 Two main high-risk categories are: (1) AI systems used as safety components of products that are subject to third-party ex-ante conformity assessment (machines, medical devices, toys); and (2) standalone AI systems with mainly fundamental rights implications, explicitly listed in an annex to the proposed AI Act (education, employment, public services, border control, law enforcement and more). This list should be seen as dynamic and may be adjusted in line with developing technology.
Continuing down the pyramid, limited-risk AI systems can be found at the third level and are regulated in the proposed Article 52 of the AI Act. Such systems are permitted subject only to certain transparency obligations. The obligations will apply to systems that interact with humans, systems that are used to detect emotions or to determine association with (social) categories based on biometric data and systems that generate or manipulate content (“deep fakes”).Footnote 32 The motivation for the proposed article is that people must be informed when they are interacting with AI systems and know when their emotions or characteristics are recognised by automatic means, or when an image, video or audio content is AI generated. Through being informed, people are given the opportunity to make informed choices.Footnote 33 The risk in this scenario is that a person may be misled – led to believe that they are interacting with another person when they are in fact interacting with a system, or perceiving content that they believe to be authentic when it is actually manipulated. Such situations can lead to a decline in trust in new technologies amongst consumers, which would be undesirable. The EU goal is to build trust in the AI area – to achieve a responsible, balanced regulation that encourages the use of AI and thus boosts innovation. Trust requires that respect for fundamental rights is maintained throughout the Union.Footnote 34
At the bottom of the pyramid of criticality, minimal-risk AI systems (such as spam filters, computer games, chatbots and customer service systems) fall outside the scope of the regulation.Footnote 35 All AI systems not explicitly included in the tiers above are defined as minimal risk, which means that most AI systems used today will not be subject to EU rules. The fact that this layer of the pyramid is described as “minimal risk” and not “non-existent risk” appears reasonable, as risk is practically never completely avoidable, whether in AI or in other aspects of life. Despite the classification of “minimal risk”, Article 69 of the proposed AI Act suggests that codes of conduct should be developed for these systems. With time, control mechanisms such as human oversight (Article 14), transparency and documentation could thus spread from regulated AI services to minimal-risk systems.
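The four tiers just described can be summarised as a simple lookup. This is an illustrative simplification, not an implementation of the proposal: the example systems and the one-line obligation summaries are drawn from the discussion above, and real classification follows the criteria of the Act rather than a string lookup.

```python
from enum import Enum


class RiskTier(Enum):
    """The four tiers of the proposal's 'pyramid of criticality' (summarised)."""
    UNACCEPTABLE = "prohibited practices (Article 5)"
    HIGH = "ex-ante conformity assessment, CE marking, supervision (Articles 6-51)"
    LIMITED = "transparency obligations towards users (Article 52)"
    MINIMAL = "outside the regulation; voluntary codes of conduct (Article 69)"


# Example classifications taken from the text above.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI safety component of a medical device": RiskTier.HIGH,
    "deep fake generator": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations(system: str) -> str:
    # Systems not explicitly captured by a higher tier default to minimal
    # risk, mirroring the residual character of the bottom of the pyramid.
    return EXAMPLES.get(system, RiskTier.MINIMAL).value
```

The residual default in `obligations` reflects the point made above: minimal risk is not a positively defined category but everything the upper tiers do not capture.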
To summarise this overview of the risk-based approach in the AI Act proposal, the high-risk category is the absolute focus of the regulation. One may even ask why the “minimal-risk” category is included in the pyramid of criticality. The focus of the following section will be on how this risk structure with its different tiers relates to the tort law issues introduced in Section II.
IV. Some tort law reflections on the risk-based approach
The proposed “pyramid of criticality” of the AI Act is interesting from several different perspectives. One central issue that it sparks, from a tort law point of view and in light of our earlier discussion on risk, is this: if a risk assessment is to be fair and requirements based on risk proportionate, is it at all possible to determine risk beforehand in fixed categories? The first challenge here is, as presented above regarding the tort law method for evaluating negligence, that risk is typically assessed in a specific situation. The outcome of a risk assessment will thus vary depending on every single circumstance of a given situation. A second challenge is that risk can be described as something highly subjective that differs from person to person. The question is thus whether it is possible to harmonise these different perceptions of risk and capture a wide range of risks while still achieving a balanced regulation that allows for innovation and a proportionate use of AI systems.
The pragmatic answer to the queries above would be that it is not possible – not even in an insurance contract – to describe every risk that an AI system could pose, let alone to different people, as the risks are largely unknown today. Therefore, standardisation is necessary. A general provision built on negligence, such as “a person who causes harm or loss by using an AI system without sufficient care shall compensate the damages”, would open the door to individual interpretations of “sufficient care” and various experimental and potentially harmful uses of AI systems – where actions would be assessed only after damage of an unknown extent had already occurred. Such a system is reasonable (and well established) concerning pure accidents in tort law, such as when someone smashes a vase or stumbles over someone’s foot, while in the case of AI there is a known element in every given situation: the involvement of an AI system. Instead of using a traditional general negligence assessment, AI systems have been deemed so risky that their use must be regulated beforehand.Footnote 36
To develop the reasoning on the connection between the pyramid of criticality and the risk assessment within fault-based liability, it should be emphasised that the existence of predetermined risk categories in the proposed AI regulation does not exclude the impact of specific elements of the risk assessment. Within every risk category, the benefits of the AI systems (economic such as efficiency, social such as faster distribution of services) have been weighed against the risks (to fundamental rights and freedoms, health and safety, vulnerable persons) that they typically entail. With regard to each category, this can be explained according to the following.
In the case of prohibited practices, the conclusion of the balancing act is that the risks are generally too high to permit the systems – despite the benefits they offer. Regarding high-risk systems, the benefits motivate risks of a certain level (with control mechanisms in place). When it comes to limited-risk systems, the risks become so moderate that it is enough to make users aware of them. This could open up a discussion on another tort law phenomenon mentioned above: assumption of risk.Footnote 37 Minimal-risk AI appears uncontroversial as it brings benefits without any identifiable risks. This said, such risks could of course still exist – or, perhaps more likely, emerge. Who can guarantee that video game data or spam filter data will never be used in illicit ways, such as to profile individuals? In order to serve their function, the risk categories with their delimitations must continually be monitored and adapted according to the developments of the field.
Continuing to the concept of strict liability in tort law, the similarities are many between traditional arguments for strict liability and the EU approach to high-risk AI systems. As mentioned earlier, strict liability is considered suitable in areas where serious risks are presented by societally necessary but potentially dangerous activities.Footnote 38 These considerations match the thoughts behind permitting high-risk AI systems with certain restrictions. The risks they bring are generally so high that such restrictions are motivated.
So, will strict liability be introduced for these systems in the EU? The issue of liability is not addressed in the AI Act but in other legal initiatives. According to the proposal for a revised directive on product liability, AI products are covered by the directive – meaning that damages may be awarded for material harm (including medically recognised psychological harm) caused by a defective AI-enabled good.Footnote 39 Both in this proposal and in the proposed directive on civil liability for AI, a central theme is the alleviation of the plaintiff’s burden of proof in cases where it is challenging to establish the causal link between damages and an AI system.Footnote 40 While these proposals are thus not primarily focused on AI risk categories, the topic of risk is more prominent in other current initiatives such as the EP resolution on civil liability.Footnote 41 The proposed AI Act refers to this resolution and a number of other resolutions that, together with the regulation, will form a “wider comprehensive package of measures that address problems posed by the development and use of AI”.Footnote 42 The resolution on civil liability will be the focus of the next section of this paper.
V. Risk and damages in the European Parliament resolution on civil liability
In short, the EP resolution suggests harmonising the legal frameworks of the Member States concerning civil liability claims and imposing strict liability for operators of high-risk AI systems. This ambitious vision will most certainly lead to a number of challenges. Not only is the concept of high-risk AI systems novel – the introduction of strict liability in a new area is always controversial and, what is more, the attitude to strict liability differs significantly throughout the EU.Footnote 43
To put these challenges into perspective, let us take a closer look at the content of the resolution.
The EP starts with some general remarks on the objectives of liability, the concept of strict liability, the balancing of interests between compensation of damages and encouraging innovation in the AI sector and the possibility for Member States to adjust their liability rules and adapt them to certain actors or activities.Footnote 44 It goes on to state that the issue of a civil liability regime for AI should be the subject of a broad public debate, taking all interests involved into consideration so that unjustified fears and misunderstandings of the new AI technologies amongst citizens can be avoided.Footnote 45 Furthermore, the complications of applying traditional tort law principles such as risk assessments and causality requirements to AI systems are addressed:
… certain AI-systems present significant legal challenges for the existing liability framework and could lead to situations in which their opacity could make it extremely expensive or even impossible to identify who was in control of the risk associated with the AI-system, or which code, input or data have ultimately caused the harmful operation … this factor could make it harder to identify the link between harm or damage and the behaviour causing it, with the result that victims might not receive adequate compensation.Footnote 46
As an answer to these challenges, the model of concentrating liability for AI systems to certain actors (those who “create, maintain or control the risk associated with the AI-system”) is motivated in paragraph 7 of the resolution. In paragraph 10, it is stated that the resolution will focus on operators of AI systems, and the following paragraphs go on to lay down the foundations for operational liability.
The EP concludes that different liability rules should apply for different risks. This seems to be in line with the tort law reflections presented above in relation to the pyramid of criticality of the proposed general regulation on AI. Given the danger that autonomous high-risk AI systems pose to the general public and the legal challenges they present to the existing civil liability systems of the Member States, the EP suggests that a common strict liability regime be set up for high-risk AI systems (paragraph 14). The suggestion is quite radical, as it means deciding beforehand that all high-risk AI systems resemble the kind of “dangerous activities” that, as described above in connection with the tort law assessments, warrant strict liability.
The resolution continues by stating that a risk-based approach must be based on clear criteria and an appropriate definition of “high risk” so as to provide for legal certainty. Helpfully, such a definition is actually suggested in detail in paragraph 15:
… an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and goes beyond what can reasonably be expected … when determining whether an AI-system is high-risk, the sector in which significant risks can be expected to arise and the nature of the activities undertaken must also be taken into account … the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk causes harm or damage and the manner in which the AI-system is being used.
It is notable that this definition of risk builds on elements from established risk evaluations such as the negligence assessment in tort law described above, with its balancing of risk factors, potential harm and actors involved.Footnote 47 This could prove helpful when examples and arguments are needed for the classification of AI systems. An immediate question concerns who is to conduct the risk assessment, as it may well be impacted by the assessor’s role as, for instance, operator or provider. Such conflicts of interest are meant to be avoided through a monitored system of notified bodies, in accordance with Chapter 5 of the proposed AI regulation. The decisions of the notified bodies are to be appealable. Breaches of the requirements for high-risk AI systems (and any breaches relating to the prohibited unacceptable-risk AI systems) shall be subject to penalties, including large administrative fines (Article 71). Thus, the preventative aspect forms an important part of the control system of the proposed AI regulation.
The EP suggests (paragraph 16) that all high-risk systems be listed in an Annex to the AI regulation, which should be reviewed every six months in order to capture technological developments. Drawing on product safety – an area where EU regulation has been in place for several decadesFootnote 48 and has served as inspiration for the suggested CE marking on the AI marketFootnote 49 – it is recommended (paragraph 17) that a high-risk assessment of an AI system be started at the same time as the product safety assessment. This implies that the risk assessment should not be rushed but may be complicated and time-consuming, which seems both realistic and reasonable considering the consequences of the risk classification.
AI systems that are not listed in the Annex should be subject to fault-based liability (paragraph 20), but with a presumption of fault on the part of the operator. Such a construction would underline the responsibility of those in charge of AI systems with limited risk, and the knowledge of a lower threshold for compensation of damages could reassure consumers of limited-risk AI solutions. To balance this, it will still be possible for operators to exculpate themselves from fault-based liability by showing that they have fulfilled their duty of care. A similar construction, though based on strict liability, can be found in the GDPR’s Article 82 on damages for personal data breaches.Footnote 50 In conclusion, all AI systems that fall within the scope of the suggested AI regulation will thus carry stricter liability than the usual fault-based rule in tort law. Only the last tier, “minimal-risk AI systems”, will be subject to traditional negligence assessments. This mirrors the attitude to AI systems as generally risky and in need of augmented control mechanisms, with corresponding solutions for compensation of losses.
The EP acknowledges that the launching of a common regime for strict liability is a major project at the European level.Footnote 51 In paragraph 19, some protected interests that should be covered by the planned regime are initially established:
… in line with strict liability systems of the Member States, the proposed Regulation should cover violations of the important legally protected rights to life, health, physical integrity and property, and should set out the amounts and extent of compensation, as well as the limitation period; … the proposed Regulation should also incorporate significant immaterial harm that results in a verifiable economic loss above a threshold harmonised in Union liability law, that balances the access to justice of affected persons and the interests of other involved persons.
The paragraph goes on to urge the Commission to re-evaluate and align the thresholds for damages in Union law (a separate question here is whether there are any such established thresholds) and analyse in depth “the legal traditions in all Member States and their existing national laws that grant compensation for immaterial harm, in order to evaluate if the inclusion of immaterial harm in AI-specific legislative acts is necessary and if it contradicts the existing Union legal framework or undermines the national law of the Member States”.
This wording recognises that the subject of immaterial harm is often sensitive in tort law. In Sweden, for instance, the development of immaterial damages has been hesitant over the years, with only a cautious expansion during the last few decades.Footnote 52 The main rule in Swedish tort law is still that economic harm is compensated without the victim having to rely on any particular rules, whereas immaterial harm is compensated only when a specific legal basis for the claim can be found.Footnote 53 Against this background, it is important that the possibilities throughout the EU to compensate immaterial harm resulting from the use of AI systems are scrutinised and, if necessary, reinforced. As with harm resulting from breaches of data protection rules, immaterial harm could potentially become the most common form of harm caused by AI systems. Therefore, in parallel with Article 82 GDPR on damages, regulation of compensation for immaterial harm may well be warranted in the AI area.
VI. Joint responsibility and insuring strict liability for AI systems
Another separate liability issue mentioned in the introduction of the EP resolution is the probability that AI systems will often be combined with non-AI systems, such as human actions.Footnote 54 How should such interaction be evaluated from a tort law perspective? Some overarching measures are suggested by the EP:
… sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges and eliminate the risk of users being less willing to accept emerging technology; … fair compensation procedures mean that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system; … the user needs to be sure that potential damage caused by systems using AI is covered by adequate insurance and that there is a defined legal route for redress.
Thus, predictability is seen as key for the development of a liability regime for AI systems. Victims are to have the same level of protection and to receive fair compensation for damages with or without the involvement of an AI system (or with the involvement of both). It is also important to note that insurance will be required for operators of AI systems. This is common when strict liability is involved,Footnote 55 as the objective of making damage claims easier for victims would be defeated if there were no actual money to be found at the tortfeasor’s end. Additionally, it is easier for businesses to insure against the risk of liability than it is for individuals to handle such risks; the cost of insurance is then spread across the consumer collective through higher prices for the product or service. How will these insurance policies work, considering that the object of insurance is largely unknown? Even though this appears to be a relevant question from a tort law point of view, the concept of insuring the unknown is no novelty within the insurance industry, where the entire idea of insuring risks builds on hypothetical scenarios. In fact, it can be assumed that risks and liability connected to AI are already regulated in a variety of insurance clauses around the world.Footnote 56
The issue of insurance is addressed in paragraphs 23–25 of the EP resolution. Regarding the uncertainties of AI risks, the EP states that “uncertainty regarding risks should not make insurance premiums prohibitively high and thereby an obstacle to research and innovation” (paragraph 24), and that “the Commission should work closely with the insurance sector to see how data and innovative models can be used to create insurance policies that offer adequate coverage for an affordable price” (paragraph 25). These statements acknowledge that insuring AI systems is a work in progress in the market and will need to be monitored and refined in the years to come.
VII. Conclusions
This article has shown that the existing and upcoming challenges are many for legislators, businesses and individuals in the area of AI, risk and liability. Some main themes explored above concern how risk is to be defined and delimited in a legal AI context, how liability for AI systems is to be constructed at the EU level and how different societal and individual interests can be balanced in the era of AI. For the time being, it would be unrealistic to provide solutions to these challenges. They are continuously changing and the proposed regulation is yet to be finalised, as are the potential rules on liability for AI systems.Footnote 57
In line with the dynamic status of the research field, the purpose of this paper has rather been to reach a better understanding of the difficulties connected to regulating AI and risk, using the traditional perspective of tort law. Within this legal discipline, at least two established constructs have been identified to help fit the AI pyramid of criticality into a more familiar legal frame: the negligence assessment within fault-based liability and the concept of strict liability.Footnote 58 Both of these tort law evaluations of liability encompass a variety of arguments and classifications concerning risk. The dense case law and theoretical works surrounding these constructs provide us with important tools for a proportionate balancing act: making the most of the many benefits of AI systems while safeguarding fundamental rights and ensuring compensation for those who suffer losses. As with every innovation in society, new risks arise. In this case, the risks are particularly unknown, with a much broader scope and a larger potential impact than in the specific areas where strict liability has been imposed before. Therefore, as demonstrated in this article, we should make the most of the legal tools we are familiar with, such as the established and often effective instrument known as tort law.
Acknowledgments
Thanks to the members of the WASP-HS project “AI and the Financial Markets: Accountability and Risk Management with Legal Tools” and the group members of the sister project “AI-based RegTech”, which is also WASP-HS-funded, for valuable input at an early writing stage. The author is also grateful to the anonymous reviewer for very useful comments.
Competing interests
The author declares none.