The way ahead on AI liability issues
Will the developing EU liability framework for regulating AI prove sufficient?
20 October 2022
Reading time: 8 minutes
Much has been said about the merits and limits of the European Parliament (EP)’s civil liability regime for AI and the European Commission (EC)’s proposals for the AI Act and the AI Liability Directive. In this post, I point towards what the future may hold for AI liability regulation, considering what currently falls outside the legal framework.
Liability issues of AI are thorny and tricky, but why do they deserve targeted legislation? Rather than simple ‘out of the box’ machines, AI systems progressively gain skills from interacting with the living beings or other artificial agents inhabiting the surrounding environment. Through this prolonged, epigenetic developmental process, more complex computational structures emerge in the state-transition system of the smart machine.
From a legal perspective, this means that AI challenges traditional notions that are key to establishing who is responsible and for what, such as foreseeability (the capacity to anticipate what a product will do or a service will offer) or negligence (failing to fulfil duties of care, which in the case of AI services may be hard to identify). More fundamentally, however, AI affects the pillars of legal causation (what has really caused harm?) and the standards of legal protection (how do we ensure protection from it?).
In practice, people will find it difficult to demonstrate the existence of a defect in an AI service or product and, in the context of EU law, protection from damages caused by AI may be weaker than from those provoked by other technological artifacts or humans.
EU legislators have been working on addressing these issues from a moral and legal perspective over the last five years. The efforts started in 2018, when the EC set up two high-level expert groups (HLEGs) to elucidate the challenges of AI. The HLEG on the ‘ethics of AI’ released a report on ‘Trustworthy AI’ in April 2019. The HLEG on ‘liability for AI and other emerging technologies’, of which I was a member, released its report in November 2019.
Drawing on the debate at the EP, and with other EU institutions, the EC presented a report on safety and liability implications of AI, the internet of things and robotics in February 2020. Then, in October 2020, the EP presented its report on the ‘Civil liability regime for AI’. As a result of this institutional debate and corresponding initiatives, in April 2021, the EC released the proposal for an AI Act, which has the ambition to regulate high-risk AI services and products. In September 2022, the EC complemented this new set of regulations with the proposal for an AI Liability Directive.
Overall, the AI Liability Directive revolves around tort law – the domain of civil law that seeks remedies, for instance in the form of monetary compensation, when a person or group of people has suffered harm (financial, emotional, privacy-related, etc.) – but the nature of AI may eventually require that we transcend it.
In this post, I look at three legal developments that progressively show how existing approaches to AI liability have not kept abreast of technological developments, which may lead to overcoming traditional civil liability regimes tout court.
First of all, the extension of current policies of tort law to the liability issues that arise from the design, manufacturing and use of AI systems shows clear limitations. Second, the collective dimension of some AI liability issues, as in the case of cyberattacks, has already suggested updating the mechanisms of tort law, e.g. with the institution of new compensation schemes. Last, the mid- and long-term nature of the liability issues triggered by AI seems to require more future-proofed legal interventions.
For a start, scholars have usually proposed extending current liability doctrines – such as duties of care and theories of agency, as well as procedural standards on burdens of proof and presumptions to tackle possible compensation gaps – to address the liability issues triggered by both high-risk and non-high-risk AI systems. Indeed, in its proposal for a ‘Civil liability regime for AI’, the EP assumes ‘that there is no need for a complete revision of the well-functioning liability regimes’.
However, this legal framework has already been shown to be far from adequate. The 2019 HLEG report on ‘liability for AI and other emerging technologies’ recommended that the burden of proving a defect should be reversed under several circumstances, falling on those producing or operating AI systems rather than on those who claim to have been harmed by them. Following this recommendation, the AI Liability Directive proposal sets up new rules establishing circumstances in which claimants have the legal right to access evidence for a liability case, and in which the deployer of an algorithm that has caused harm is presumed to be at fault.
The extension and expansion of existing doctrines in tort law, however, may simply not be enough. Stepping away from traditional approaches in the field seems necessary, especially when dealing with intricate legal chains of causes and effects in complex digital environments, or with cases of distributed responsibility, in which damages are caused by local interactions that are not in themselves illegal but, rather, morally neutral.
It is of course possible to adapt traditional policies on strict liability and fault-based liability – under which liability is imposed even without finding a specific fault, or is attributed to multiple actors according to their different shares of responsibility – to the context of AI services and products, for every human wrongdoer involved.
This is the approach of the EC’s proposal for the new directive on liability for defective products, which was released at the same time as the AI Liability Directive proposal. By establishing that software and digital manufacturing files shall be deemed ‘products’, the new product liability directive ensures that strict liability can apply to them.
However, this approach will not help in cases in which either no human is liable for damages, or the victim is unable to identify the person that has committed harm. These scenarios will likely multiply the ‘accountability gaps’ of AI.
Cases of hacking illustrate how current tort law rules may prove insufficient to defend tort victims of cyberattacks, due to the collective dimension of the problem. Much as occurs with the profiling of individuals, the target of cyberattacks is not any person per se, but either networks or people clustered according to certain characteristics or parameters in a complex digital environment, in which humans are often only in the loop or are not present at all.
Although this scenario is the bread and butter of cybersecurity experts, public prosecutors and policy makers, it can be difficult, if not impossible, for the individual victim of a cyberattack to identify a human perpetrator.
Scholars have therefore recommended several ways in which the law can strengthen the protection of tort victims, for example, through the implementation of compensation schemes, as stated in the HLEG’s report on ‘liability for AI and other emerging technologies’. Such schemes could be equivalent to those applicable to victims of violent crimes, who seek compensation in cases where no fault or responsible person can be identified.
However, the unpredictability of AI behaviour in increasingly intricate digital environments may require more radical solutions to future-proof our capacity to regulate developing products and services. Scholars have discussed whether and to what extent new legal statuses for AI may help the law tackle the accountability gaps brought forward by these systems.
We saw signs of this as early as 2017, when the EP invited the EC to explore new forms of electronic personhood for some kinds of robots and AI systems. The EP’s resolution ignited a debate on principles, rather than an empirical assessment of the extent to which the proposal might fill the gaps of current legal frameworks vis-à-vis cases of distributed responsibility, complex chains of legal causation, etc.
On the one hand, arguments against granting AI systems legal personhood, as is the case with corporations, claim that this would become a means to shield humans from the consequences of their conduct. On the other hand, those in favour maintain that the proposal, by instituting new forms of accountability for AI systems, would help the law address the liability issues of artificial, hybrid, collective and distributed digital ecosystems.
In the end, the HLEG’s 2019 report, which inspired the EC’s 2021 and 2022 proposals for the AI Act and the AI Liability Directive, recommended that ‘for the purposes of liability, it is not necessary to give autonomous systems a legal personality’. However, a door was left open, if not in the context of liability proper, at least in other legal regimes. The experts of the Group stressed that they ‘only look[ed] at the liability side of things and d[id] not take any kind of position on the future development of company law – whether an AI could act as a member of a board, for example’.
To sum up, going forward we can expect AI liability issues to develop from the short to the long term and to require corresponding approaches.
In the short term, the traditional prudence of the law will be at work through analogy, expanding current doctrines of tort law to meet the normative challenges of AI, e.g. the reversal of the burden of proof under certain circumstances, or, alternatively, complementing the principles of tort law with further mechanisms of legal protection, such as the compensation schemes mentioned above.
Then, within this generation, it is likely that current accountability gaps of AI systems will require the formulation of new forms of liability for such systems. The exponential growth of AI innovation and of other data-driven technologies supports this conjecture.
Of course, the next generation of scholars will have the vantage point of determining retrospectively how the law managed this mounting pressure. Whereas lawmakers have traditionally tackled incremental problems of economic growth, ageing populations, the development of technological standards, and so on, the problems posed by the liability issues of AI, the internet of things or the internet of everything in outer space grow at an exponential rate. We should not underestimate this trend by settling for short-term and short-sighted solutions.
For more information about the EU AI Liability Directive, including five reasons why the EU should act, an explanation of under-compensation for accidents, three policy options designed to avoid it, and an analysis of AI liability beyond traditional accident scenarios, see our expert explainer, AI liability in Europe: anticipating the EU AI Liability Directive.