AI regulation is one of the hot topics of today. In the EU, the European Commission and the European Parliament suggest introducing strict liability rules for operators of high-risk AI systems. To create a suitable liability regime, we must consider what makes AI systems different from their non-AI counterparts. In our article, we identify AI’s novel approach to problem-solving and the potential for (semi-)autonomous decision-making as key issues for liability. However, deploying AI will not necessarily prove riskier than the human alternative – on the contrary, AI systems might actually be safer. Introducing strict liability is usually justified when the regulated activity poses an inherent risk despite the application of reasonable care. This sits uneasily with the generally safer use of AI. The dangers AI poses for liability do not necessarily coincide with the inherently riskier situations typically regulated by strict liability regimes. In our article, we argue that when formulating a liability regime for AI, we need to consider which aspects of AI prove particularly challenging for liability. More specifically, we need to evaluate whether introducing strict liability for specific AI systems is always appropriate, especially given that deploying AI does not necessarily pose the inherent risks usually regulated by strict liability regimes.
By Miriam Buiten & Jennifer Pullen[1]
I. OPEN QUESTIONS ON LIABILITY FOR AI
The regulation of AI is subject to intense discussion. In the EU, the proposed AI Act[2] introduces ex ante obligations for specific AI systems and provides a definition of what is to be considered high risk. Further, we expect a review of the Product Liability Directive and a proposal for EU AI liability rules. Up until now, there has been a clear tendency to regulate liability for AI using a risk-based approach: The Expert Group Report of 2019[3] considered strict liability an appropriate response for emerging digital technologies if they might typically cause significant harm. Following this approach, the European Commission, in its White Paper and accompanying Report on the safety and liability implications of AI,[4] suggests introducing a strict liability regime for operators of risky AI. The European Parliament has also spoken in favor of strict liability for AI systems that are inherently high risk or used in critical sectors.[5]
Introducing AI liability rules gives rise to a variety of questions. For example, what gaps exist in the general liability regime with respect to AI, and what rules can optimally fill those gaps? We need to consider what makes AI systems unique and whether liability rules can cover these characteristics of AI. Once we have identified those gaps, we need to ask who should be liable and under what regulatory regime. If we follow a risk-based approach, we must further contemplate what high risk means and how we want to define the term for regulatory purposes. We could ask whether the definitions stated in the proposed AI Act could work as a blueprint for the liability framework or, if not, whether different regulatory problems arise in the context of liability.
II. GAPS IN AI LIABILITY
With the rapid emergence of AI, the question arises whether our current liability regime can cover all damage caused by AI systems or whether the novel features of AI will push existing liability rules to their limits. When discussing these issues, we must, on the one hand, consider what makes AI systems unique and, on the other hand, whether our liability rules can accommodate the particularities of AI.
First, however, we need to take a step back and establish what exactly should fall under the term “AI” – an endeavor easier said than done, as the definition of AI proves to be notoriously blurry. In its White Paper, the European Commission takes the approach of describing AI by identifying its key characteristics. The Commission considers complexity, opacity, unpredictability, and autonomy to be the defining features of AI.[6] In contrast, the proposed AI Act opts for a different delimitation of the term. It does not aim to define AI’s characteristics but refers to the underlying technologies used.[7] In its draft, the European Commission envisages a comprehensive approach, according to which machine learning, expert and logic systems, as well as statistical approaches would fall under the regulation.[8] However, this broad scope of application entails risks of overregulation and uncertainty in application. Therefore, in its compromise text, the Council of the EU proposed a narrower delineation of the term, defining AI as systems that receive data to generate output by learning, reasoning, or modelling under a given set of human-defined objectives.[9] Whereas the Commission’s approach ensures extensive application and thus fewer loopholes, the Council’s proposal assures greater legal certainty.
With regard to a liability regime, we need a specific definition of AI. On the one hand, if we set a broad scope of application, the majority of systems caught by the regulation would not necessarily pose a problem for existing liability rules. On the other hand, for systems that prove incompatible with the current liability regime, an unambiguous definition will be essential to avoid litigation over the question of which liability framework applies. When discussing AI liability, it is therefore crucial to identify the specific features of AI that strain current liability rules. These challenges will not only indicate where current regulation might fail but also mark the boundaries beyond which regulatory action might not be needed. Defining the problems of AI for liability differs from defining AI as a phenomenon in itself. To set an appropriate scope of application for AI liability rules, we thus need to consider the key aspects of AI that could potentially pose liability problems.
For current liability rules, AI proves problematic in two distinct ways: First, AI follows a unique method of problem-solving that differs fundamentally from human decision-making. This difference is not bad per se, as the approach promises to save time and resources, leading to better (or at least more efficient) decisions. However, this improvement comes at a price: decisions made by AI become less predictable and understandable, making human oversight more difficult in the process. Second, complex AI systems will increasingly act autonomously, at least to a certain degree. Highly autonomous systems cause a shift in control. It becomes unclear who should be responsible and under which circumstances a human supervisor should intervene.[10] We need to ask whether monitoring obligations should be imposed on operators of AI systems – for instance, whether doctors should be obliged to override a faulty diagnosis by AI. We also need to consider how such an obligation can be designed so that it does not deprive AI of one of its significant benefits, namely allowing people to delegate tasks to it. In that regard, the distinction between autonomy and automation becomes particularly relevant. While automatic systems carry out predetermined processes, an autonomous system makes independent and free decisions.[11] Only (semi-)autonomous systems create concerns regarding the allocation of liability: purely automated systems are pre-programmed and, hence, subject to human responsibility.[12] In particular, the autonomy and unpredictability of AI systems challenge our current liability rules in several ways: First, it is unclear how we can establish faulty behavior on the part of people operating AI systems if the system’s actions cannot be reasonably anticipated. Second, proving causality becomes increasingly tricky as the AI’s outputs become less traceable. Third, it remains questionable how to distribute responsibility for autonomous systems between operators, manufacturers, and other stakeholders.[13]
When analyzing the liability issues posed by AI, it becomes evident that the identified characteristics essentially boil down to one technology – machine learning algorithms.[14] A proposal would therefore be to link the scope of application to machine learning algorithms instead of carrying out the tricky task of defining AI.[15] Limiting the scope of regulation to machine learning algorithms would offer legal certainty, as the term is narrowly defined, while still capturing the challenges that AI raises for current liability rules.[16]
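To make the point about machine learning concrete, consider a minimal, purely illustrative Python sketch (not taken from the article; the credit-scoring scenario, data, and thresholds are invented). It contrasts a pre-programmed rule, whose behavior is fixed and traceable, with a classifier whose decision rule is learned from data – the very feature that makes outputs harder to anticipate and trace.

```python
# Illustrative sketch only: a hand-written rule versus a rule learned from data.
from sklearn.tree import DecisionTreeClassifier

def rule_based_credit_check(income: float, debt: float) -> bool:
    # Automated, pre-programmed rule: its behavior is fully specified by a human.
    return income > 50_000 and debt < 10_000

# Hypothetical training data: [income, debt] -> approved (1) or rejected (0).
X = [[60_000, 5_000], [30_000, 20_000], [80_000, 15_000], [25_000, 2_000]]
y = [1, 0, 1, 0]

# The classifier derives its own decision boundary from the examples;
# retraining on different data may change the outcome for the same applicant.
model = DecisionTreeClassifier(random_state=0).fit(X, y)

print(rule_based_credit_check(40_000, 3_000))  # fixed by the written rule
print(model.predict([[40_000, 3_000]]))        # determined by the learned model
```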
III. HIGH-RISK AI
Regulating AI without hampering its development proves to be challenging. The EU has attempted to strike a compromise by adopting a risk-based approach. It proposes a strict liability regime for high-risk AI.[17] It further suggests banning AI systems that pose specific unacceptable risks and allowing the use of certain high-risk AI applications only upon fulfilment of particular safety requirements. A risk-based approach inevitably leads to the issue of defining risk. The AI Act gives guidance on the concept of high-risk systems. In its proposal, the European Commission distinguishes between prohibited AI,[18] high-risk AI[19] and limited-risk AI[20] – the latter only being subject to light transparency obligations. According to the AI Act, AI is classified as high-risk either if it is part of a product required to undergo third-party conformity assessment under the Union harmonization legislation listed in Annex II, or if the area in which the AI is applied is considered risky, as listed in Annex III of the proposal. In its compromise text, the Council of the EU follows the structure of the Commission’s proposal. Still, it provides more detail on what is to be defined as high-risk under Annex III of the proposal and adds social scoring to the prohibited uses of AI.
The classification offered in the proposed AI Act could be used as a blueprint for future liability rules. In particular, the proposal indicates which AI systems might justify introducing strict liability. However, we need to consider that the AI Act serves a different purpose than liability law. While the AI Act acts as an ex ante regulatory tool, liability rules only take effect ex post, after the damage has occurred. In blunt terms, liability applies once ex ante regulation has failed. Defining risk for liability rules, hence, might differ from specifying principles for market approval. High risk within the meaning of the AI Act does not necessarily coincide with the problems identified for liability. Specifically, the proposed AI Act does not address the challenges of AI to liability identified above, related to its novel approach to problem-solving and the potential for (semi-)autonomous decision-making. To adequately address the issues AI poses for liability, we may therefore need to conceptualize high risk in a different way.
IV. WHO SHOULD BE LIABLE?
As previously mentioned, AI systems disrupt the allocation of responsibility between manufacturers and operators. Manufacturers could argue that they are not liable because their product is not defective and the AI system simply acted (semi-)autonomously as intended. Operators could argue that they are not at fault, as the AI system was supposed to act without their supervision. Thus, the injured party might end up having to bear the damage.
Liability rules should be drafted to prevent a gap in liability between the two stakeholders. Whereas it is safe to say that manufacturers will be, at least to some degree, responsible for their AI systems, there are multiple reasons also to hold operators accountable.[21] For one, making operators liable for their AI systems encourages them to take precautions. With appropriate liability rules in force, operators will be incentivized to implement monitoring measures when deploying semi-autonomous AI systems. For highly autonomous AI systems, liability further provides an incentive for operators to keep their systems up to date and ensure that they are correctly used. Moreover, operators tend to benefit from using AI, so it only seems appropriate for them to bear some of the associated costs. Nevertheless, as discussed below, AI systems may also produce desired societal benefits, so it should not be made overly unattractive for operators to use AI systems. Under standard fault liability for AI operators, injured parties may face significant hurdles in obtaining compensation. Therefore, changes to the standard or burden of proof for claimants in cases of AI harm are justified. At the same time, we must be careful not to overreach, creating chilling effects on AI adoption in the process.
V. WHAT REGIME AND ON WHAT REGULATORY LEVEL?[22]
For manufacturers, the EU Product Liability Directive provides for a strict liability regime.[23] Nevertheless, the rise of AI challenges the application of the Directive in various ways. First, it is debated whether software is to be considered a product within the meaning of the Directive, as standalone software typically lacks tangibility. Once integrated with hardware, it may further become tricky to clearly distinguish between products and services for AI systems. Second, the interpretation of the term defect might need some adjustment. More specifically, we need to contemplate what expectations users are entitled to have of AI and what should be considered defective with respect to autonomous AI systems. Moreover, proving a defect may prove complicated for consumers due to AI’s somewhat unpredictable and opaque features.[24] Hence, an adaptation of the burden of proof could be discussed, as this would give manufacturers an incentive to build their AI systems in a comprehensible manner.
For operators of AI, on the other hand, national – usually fault-based – liability rules currently apply. However, in its White Paper, the European Commission proposes a horizontal strict liability regime for high-risk AI. Introducing strict liability is justified when the regulated activity poses an inherent risk despite reasonable care by operators. With strict liability, the optimal degree of care does not need to be evaluated, as all costs of the accident are shifted to the tortfeasor, inducing them to take precautions. As risky activity will likely lead to harm even under the application of reasonable care, strict liability helps internalize these unavoidable negative externalities. Further, the regime can generate an optimal activity level by incentivizing individuals to refrain from risky actions due to looming liability.[25] Therefore, imposing strict liability rules on high-risk AI systems seems like a good starting point, as strict liability can help cover certain inevitable risks.
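The underlying intuition can be expressed with the standard law-and-economics model of unilateral accidents (a textbook sketch, not part of the original article; the notation is introduced here): with care costs x, accident probability p(x) decreasing in x, and harm H, the social objective and the resulting first-order condition read:

```latex
% Textbook unilateral-accident model (standard law-and-economics notation, not from the article).
% x: cost of care, p(x): accident probability (decreasing in x), H: magnitude of harm.
\[
  \min_{x}\; x + p(x)\,H
  \qquad\Longrightarrow\qquad
  1 + p'(x^{*})\,H = 0 .
\]
% Under strict liability the injurer bears x + p(x)H in full and thus chooses
% the efficient care level x* and weighs the expected harm when deciding how
% much to engage in the activity; under a fault rule, a court must instead
% determine the due-care standard, and expected harm no longer enters the
% activity decision once that standard is met.
```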
However, enforcing strict liability can turn out to be a double-edged sword. Activities with inherent risks may still produce desired societal benefits. Strict liability regimes could cause tortfeasors to become overly cautious. While the costs of harm are internalized through strict liability, positive effects on society may be lost if individuals do not reap sufficient immediate benefits and, hence, decide that risking liability is not worth it. With AI, it is clear that its deployment can be highly beneficial to society. Autonomous cars are likely safer than those driven by humans, while AI diagnostic tools may detect diseases more quickly than human doctors. Whereas ensuring compensation for damage caused by AI is necessary, we still need to keep in mind that not using AI will result in opportunity costs.[26] Further, there is a concern that strict liability regimes might obstruct innovative efforts within the field of AI. However, it is debatable whether this is necessarily the case.[27]
In general, we need to ask whether AI entails a higher risk than its non-AI counterparts that would justify subjecting specifically these systems to strict liability rules. We need to consider that doing without AI often means relying on human, and possibly less safe, solutions. In various areas, deploying AI may prove less risky. The problem with AI is not that its application is risky per se, but that its results are less predictable and that control shifts away from human manufacturers and operators – AI’s actions are not wholly foreseeable or controllable. The risks posed by AI for liability therefore do not necessarily coincide with the inherently riskier situations regulated by strict liability regimes. However, strict liability does offer a solution for one particular issue with AI, namely the difficulty of assigning responsibility. With strict liability, we designate a clearly liable party so that there is no risk of the damage remaining with the injured party.
The main issue with AI and liability lies in the fact that injured parties might not be able to claim damages because, above all, it might be challenging to prove a link between the harm incurred and the AI’s actions. While a strict liability regime helps assign responsibility, it does not solve the issue of establishing causality. Cases involving AI show similarities to established constellations of liability for third parties, such as animal owners’ liability. Using the respective national liability regimes as a blueprint might therefore prove a reasonable approach to formulating AI liability rules. In general, we must prevent an excessive burden on AI operators, as we do not want to chill the use of AI that is beneficial to society. It will be essential to work with appropriate and effective exoneration grounds. While the onus will still lie with the operator, exoneration possibilities in effect temper a potentially excessive liability regime.
Furthermore, we need to consider that most problematic cases will likely already be covered by sector-specific regulation – for instance, in the areas of transportation and medical devices. We must therefore contemplate whether harmonizing liability for AI is needed at all. On the one hand, harmonized liability rules ensure the same level of protection for all users and a level playing field for operators in Europe. On the other hand, sector-specific regulation may already offer sufficient protection against AI liability risks or might be the best place to add liability rules tailored to the specific sector. Diverse Member State laws further allow us to observe which liability rules prove suitable and would, additionally, preserve the internal coherence of the national liability regimes. Lastly, a harmonized EU liability framework does not necessarily guarantee the uniform application of the law: liability rules remain subject to interpretation by national courts as well as to national procedural rules.[28] In sum, we need to ask whether the benefits of introducing a harmonized liability regime at EU level ultimately outweigh its drawbacks.
VI. TRANSPARENCY AS A SOLUTION?
The opacity of AI poses a challenge for forming a functioning and purposeful liability system, as the ambiguity of AI makes it difficult to identify and prove possible violations of laws. Transparency is hailed as the remedy for opaque AI, and regulatory bodies are pushing for transparent AI systems. The European Commission’s White Paper and the European Parliament’s Report on a Framework for AI raise the issue of non-transparent AI. In its subsequent legislative proposal for ex ante regulation, the European Commission calls for high-risk AI to be transparent.[29] Further, the proposed AI Act requires providers of specific systems to inform users of the use of AI if the system recognizes emotions or membership of (social) categories based on biometric data, or generates or manipulates content.[30] While the importance of transparency is thus made abundantly clear, it remains vague what is actually meant by transparent AI.
From the perspective of liability, the idea is that greater transparency can help victims prove harm, as transparent AI should be more traceable. Yet, it is questionable whether setting requirements for transparent AI can be considered an antidote to liability issues. With regard to algorithmic decisions made by AI, transparency primarily refers to the possibility of understanding how certain factors affect the result in a specific case.[31] In concrete terms, the algorithm’s decision-making process is shaped by the training data and testing procedure as well as by the actual data used (input) and the system’s decision model, which produces the result (output).[32] If AI is to be truly transparent, each of these steps must be made comprehensible. Further, for transparency to be practical, its implementation would need to yield a feasible and useful explanation. If programmers or producers are unable to comply with stated transparency requirements, their enforcement becomes, of course, unfeasible. Moreover, if the required transparency does not provide sufficient information to plaintiffs, defendants, and courts in legal cases, demanding it serves little purpose.[33] Therefore, we must consider what degree of transparency proves possible and helpful.
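To illustrate what “understanding how certain factors affect the result in a specific case” could look like in practice, the following toy Python sketch (not from the article; feature names and figures are invented) decomposes a single prediction of a simple linear model into per-factor contributions. For such a model the decomposition is exact; for opaque models such as deep networks, an equally faithful case-level explanation is much harder to produce.

```python
# Illustrative sketch only: case-specific explanation of a linear model's output.
import numpy as np
from sklearn.linear_model import LinearRegression

features = ["age", "income", "prior_claims"]

# Invented training data: applicant attributes and, e.g., an insurance premium.
X = np.array([[25, 40_000, 0], [50, 60_000, 2], [35, 30_000, 1], [60, 80_000, 3]])
y = np.array([300.0, 650.0, 420.0, 900.0])

model = LinearRegression().fit(X, y)

applicant = np.array([40, 50_000, 1])
prediction = model.intercept_ + model.coef_ @ applicant

# Each factor's contribution to this individual outcome can be read off directly.
for name, weight, value in zip(features, model.coef_, applicant):
    print(f"{name}: contributes {weight * value:+.2f}")
print(f"predicted outcome: {prediction:.2f}")
```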
It is essential to bear in mind that transparency requirements and liability regimes are intertwined. Transparency cannot be an end in itself; third parties should be able to react to the information disclosed. Transparency aims to create comprehensibility so that people confronted with algorithmic decisions know whether and in what manner they have been affected by AI. More specifically, the degree of required transparency depends on the conditions for liability and on which party bears the burden of proof.[34] Further, there will likely be a trade-off between transparent and more accurate AI. We need to ask ourselves whether we are willing to hold back innovation and development in AI for the sake of transparency in civil liability cases.
VII. OUTLOOK
We are still eagerly awaiting proposals for new EU rules on AI liability. In general, there are several issues to solve. For one, we need to attribute responsibility for AI systems that function (semi-)autonomously between manufacturers and operators. This will prove relatively straightforward in some instances – for example, for product liability. However, as we established, it makes sense also to hold operators liable when they deploy AI systems, and creating suitable liability rules for AI operators turns out to be trickier. Moreover, with the advent of increasingly complex AI systems, proving fault and causality becomes more and more difficult.
One solution would be to introduce a strict liability regime for certain types of AI. Strict liability would have the advantages of facilitating the allocation of responsibility between different stakeholders as well as enabling easier enforcement. Further, the liability regime would help reduce the activity level in high-risk sectors. However, strict liability could conversely hamper AI adoption, which proves particularly problematic where AI systems may be considerably safer than their non-AI counterparts. We need to consider whether introducing strict liability is still appropriate if the risk in question is, in fact, reduced. Put differently, we might even have to ask whether these cases remain high risk once AI is involved. Another problematic aspect of strict liability for high-risk AI lies in defining the appropriate scope. We need to evaluate what is actually meant by high-risk AI and whether high-risk AI systems are not already subject to sector-specific regulation. If a harmonized liability regime is introduced, it will further be important to consider appropriate and effective exoneration grounds to tone down the possibly harmful effects of liability.
Overall, we should bear in mind that additional liability rules should fill the gaps existing in our current liability law regimes. The EU has impressively been ahead of the curve with its regulatory proposals. While this is important in some contexts, for example concerning the regulation of facial recognition in public areas, it might still prove too early in other sectors. For instance, we still lack AI consumer products that act in a truly autonomous manner. Of course, it is close to impossible to pinpoint the right time for regulatory intervention. Still, it might be a reasonable approach to wait until the concrete issues are fully identified. In the end, liability rules are one piece of the bigger regulatory puzzle. Ex ante obligations and ex post liability rules complement one another. Therefore, the proposed AI Act could help alleviate some concerns regarding risky AI. In general, it might be worth considering whether introducing strict liability for specific AI systems is always appropriate – especially since the risks posed by AI for liability do not necessarily coincide with the inherently riskier situations usually regulated by strict liability regimes.
[1] Respectively, Assistant Professor of Law and Economics and Research Assistant, University of St. Gallen.
[2] Proposal for a Regulation of the European Parliament and of the Council laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts, COM(2021) 206 final (the “AI Act”).
[3] Expert Group on Liability and New Technologies – New Technologies Formation, Liability for Artificial Intelligence and Other Emerging Digital Technologies (2019).
[4] Communication White Paper of 19 February 2020 on Artificial Intelligence – A European approach to excellence and trust, COM(2020) 65 and Commission Report of 19 February 2020 on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, COM(2020) 64.
[5] European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)); European Parliament draft report of 2 November 2021 on artificial intelligence in a digital age (2020/2266(INI)).
[6] European Commission White Paper on AI, p. 12.
[7] See also Buiten, M. (2019). Towards intelligent regulation of Artificial Intelligence, Eur. J. Risk Regul., 10(1), pp. 41-59.
[8] AI Act, Article 3(1) as well as Annex I of the proposal.
[9] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (Presidency compromise text), 2021/0106(COD).
[10] Buiten, M., de Streel, A. & Peitz, M (2021). EU liability rules for the age of AI, CERRE Report, available under https://cerre.eu/publications/.
[11] For more on the distinction between autonomy and automation, see Parasuraman, R., Sheridan, T.B., & Wickens, C.D. (2000). A model for types and levels of human interaction with automation, IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 30(3), pp. 286-97.
[12] Buiten, M. (2021). Chancen und Grenzen “erklärbarer Algorithmen” im Rahmen von Haftungsprozessen (pp. 149-175) in Zimmer, D. (ed.), Regulierung für Algorithmen und Künstliche Intelligenz – Tagung an der Universität Bonn am 7. und 8. September, Baden-Baden: Nomos (in German).
[13] Buiten, de Streel, & Peitz (2021), p. 35.
[14] Machine learning algorithms identify patterns in a data set and derive rules from them, which the algorithm in turn refines further. Machine learning applications thus learn independently and can, under given conditions, also make autonomous decisions (Mitchell, T. (1997). Machine Learning, New York: McGraw-Hill).
[15] Ebers, M. (2020). Regulating AI and Robotics: Ethical and Legal Challenges in Ebers, M., & Navas, S. (eds.), Algorithms and Law (pp. 37-99), Cambridge: Cambridge University Press.
[16] Buiten (2019); or Hacker, P. (2020). Europäische und nationale Regulierung von Künstlicher Intelligenz, NJW 2142 (in German).
[17] See Expert Group Report on AI, European Commission White Paper on AI, and European Parliament Resolutions on AI.
[18] AI Act, Article 5.
[19] AI Act, Articles 6 et seq.
[20] AI Act, Article 52.
[21] Buiten, de Streel, & Peitz (2021), pp. 56 et seq.
[22] The following explanations are based on Buiten, de Streel, & Peitz (2021).
[23] Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.
[24] See further ELI Guiding Principles for Updating the Product Liability Directive for the Digital Age of January 2021, and Buiten, de Streel, & Peitz (2021), pp. 49 et seq.
[25] Buiten, de Streel, & Peitz (2021), pp. 40 et seq.
[26] See Belfield, H., Hernández-Orallo, J., Ó hÉigeartaigh, S., Maas, M. M., Hagerty, A., & Whittlestone, J. (2020). Consultation on the White Paper on AI: a European approach. Report by the Centre for the Study of Existential Risk.
[27] See for example, Galasso, A., & Luo, H. (2018). When does Product Liability risk chill Innovation? Evidence from Medical Implants, NBER Working Paper Series (No. w25068).
[28] Buiten, de Streel, & Peitz (2021), pp. 59 et seq.
[29] AI Act, Article 13.
[30] AI Act, Article 52.
[31] See for example Ananny, M., & Crawford, K. (2018). Seeing Without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability, New Media Soc, pp. 1-17.
[32] For more see Buiten (2019), pp. 50 et seq.
[33] Buiten (2019), pp. 53 et seq.
[34] Buiten (2021).