
Liability for AI-driven harm: product liability and negligence in the age of AI

May 8, 2025


Artificial intelligence (AI) systems are increasingly capable of autonomous decision-making and are being deployed in diverse domains from transportation to healthcare. While these technologies promise great benefits, they also pose novel risks of damage or injury when things go wrong. This raises a pressing question in English law: who bears responsibility when an AI system causes harm? Unlike a conventional tool, an AI can act in unpredictable ways without direct human control, complicating the application of traditional legal doctrines.

English law has long addressed harm caused by products or human actions through two primary tort frameworks – the fault-based tort of negligence and the strict liability regime for defective products under the Consumer Protection Act 1987 (CPA 1987). In the age of AI, these existing frameworks are being tested and, in some cases, augmented by new legislation. This essay examines how liability for AI-driven harm is allocated under English law. It focuses on how negligence and product liability doctrines apply to AI developers, manufacturers, and users. The analysis considers recent legislative developments – notably the Automated and Electric Vehicles Act 2018 (AEVA 2018) and the new Automated Vehicles Act 2024 – as well as real-world examples of AI-related accidents. While the emphasis is on England, reference will be made to Scotland and Northern Ireland where relevant (Scots law, for example, shares similar principles through the law of delict).

AI Systems and the Challenge of Legal Responsibility

AI systems can be defined as machines or software capable of performing tasks that normally require human intelligence, often through algorithms that learn and adapt (machine learning). A distinguishing feature of advanced AI is a degree of autonomy: AI can make decisions or recommendations that are not directly pre-programmed for every scenario. This creates a challenge for legal responsibility. If an autonomous vehicle swerves and causes a crash, or a medical diagnosis AI recommends a harmful treatment, there is no human act in the immediate decision. Yet the law must find someone accountable after the event. AI itself, not being a legal person, cannot be sued or held liable. Responsibility must therefore be traced back to the people or companies behind the AI – typically the developers or manufacturers who created it, and the operators or users who deploy it.

The key issue is that AI’s autonomous and adaptive qualities may make it harder to apply traditional fault-based concepts. Negligence liability usually hinges on proving a person failed to exercise reasonable care. With AI, harmful outcomes might occur even when all humans involved exercised care, simply because the AI learned an unexpected behaviour or encountered a novel situation. This raises the spectre of an “accountability gap” where victims could find it difficult to prove any human negligence.

On the other hand, strict product liability might seem more apt – if an AI product is “defective” and causes injury, the manufacturer can be liable without proof of fault under the CPA 1987. However, even product liability faces challenges with AI, such as whether software is considered a “product” and how to establish a defect in a complex, evolving algorithm. These difficulties have led to debates about whether existing laws are sufficient. Some scholars argue that new liability regimes may be needed for AI – for example, proposals for no-fault compensation schemes or extending strict liability to all AI systems. Nonetheless, as of 2025, English law largely relies on adapting negligence and product liability principles to AI scenarios, with targeted legislative interventions in specific areas like autonomous vehicles.

Negligence Liability for AI-Caused Harm

Negligence is the primary fault-based tort in English law for unintentional harm. To succeed in negligence, a claimant must prove the defendant owed a duty of care, breached that duty by failing to meet the standard of a reasonable person, and caused the claimant’s damage as a result. In the context of AI, potential defendants include AI developers or manufacturers (who design, program, or build the system) and AI operators or users (who deploy or interact with the system). The challenge is fitting the autonomous actions of AI into the usual negligence framework, which presupposes human acts or omissions.

Duty of Care: Manufacturers and developers of AI technology are likely to be found to owe a duty of care to those foreseeably affected by their products. This is analogous to the duty established in Donoghue v Stevenson [1932] AC 562 (HL), where a manufacturer was held to owe a duty to consumers to avoid foreseeable harm from defective products. By extension, an AI developer or producer owes a duty to users and bystanders to design and program the AI with reasonable care to avoid causing physical injury or property damage. This has not yet been tested in English courts with AI specifically, but the general principle is well established. For example, if a company manufactures an autonomous delivery robot, it owes a duty to pedestrians and customers who could be harmed by a malfunction of that robot. Similarly, professionals deploying AI in their practice (e.g. a hospital using an AI diagnostic tool) owe duties of care to their patients in how they select and use such tools.

Duties of care in novel scenarios are sometimes evaluated using the three-stage test from Caparo Industries plc v Dickman [1990] 2 AC 605: foreseeability of damage, a relationship of proximity, and whether it is fair, just and reasonable to impose a duty. It is foreseeable that a negligently designed AI could cause harm, and there is proximity between the developer and users or third parties who rely on or are exposed to the AI. Imposing a duty incentivises safe design without unduly burdening innovation – indeed, ensuring product safety is generally deemed fair and reasonable. Thus, it is likely English courts would recognise duties of care on AI creators analogous to those on manufacturers of traditional products. A duty may also be recognised for those who deploy or operate AI in situations where they can prevent harm. For instance, a company implementing an AI-driven process in a factory owes a duty to employees and visitors to configure and monitor it safely.

Breach of Duty (Standard of Care): Once a duty is established, the next question is whether the defendant met the standard of care – essentially, did they act as a reasonable and prudent person in their role would have? For AI developers, the standard would be that of a reasonably competent professional in software design under the circumstances. Breach might be shown by evidence of inadequate testing, failing to program safety features, or ignoring known risks in the AI’s design. Determining breach in AI cases can be complex. AI systems, especially those using machine learning, can behave in ways their creators did not fully anticipate. If a particular harm was not foreseeable to a reasonable developer at the time (for example, an AI navigation system making a bizarre error only in a rare scenario), then failing to prevent that might not amount to negligence – the law does not demand omniscience. The flipside is that as AI developers become aware of potential failure modes (say, an autonomous car’s inability to detect a certain obstacle), the standard of care would require addressing that risk.

One illustration is the scenario of semi-autonomous driving systems. If a carmaker’s AI driving system has trouble recognising pedestrians in certain lighting, a reasonable manufacturer should identify that flaw and either warn about it or fix it. If it does not, a court could find a breach of duty should a pedestrian later be injured because of the flaw. Notably, proving breach may require technical evidence about what the developer knew or should have known – highlighting the role of industry standards and guidelines for AI safety. As AI technology matures, what counts as “reasonable” will evolve. There is concern that applying the traditional reasonableness test to AI might be unpredictable; what seems reasonable to a judge not versed in AI could differ from industry views. This uncertainty is one reason some commentators favour strict liability approaches for AI-caused harm, but negligence law can adapt by using expert evidence on proper software engineering practices.

For users and operators of AI, breach of duty would mean the person failed to act as a reasonable user in controlling or overseeing the AI. For example, a human driver required to supervise a Level 3 autonomous vehicle (one that can drive itself under some conditions but expects human fallback) could be negligent if they entirely stopped paying attention contrary to instructions. If an accident occurs because the human did not intervene when the system malfunctioned, the human operator may be found partly liable for breaching their duty to monitor. This was tragically illustrated in the 2018 Uber self-driving car incident in Arizona (USA), where the vehicle’s safety driver was streaming a video on her phone instead of watching the road; as a result, she failed to brake when the AI system did not recognise a pedestrian, leading to a fatal collision.

In a UK context, such a safety operator could be liable for negligence to the victim (and indeed the Uber safety driver faced criminal negligence charges in the US). Conversely, if an AI is truly intended to operate without human input, it may be that no user breach is present because the user had no role at the time – English law is beginning to acknowledge this by shifting responsibility away from passengers in fully self-driving cars, as discussed below.

Causation and Remoteness: The claimant must also link the breach to the harm (factual and legal causation). In AI cases, causation can be contentious. Suppose an AI medical system suggests a wrong dose and a doctor follows it, harming a patient. If the developer breached a duty by providing a flawed algorithm, is that a cause of the injury? Yes, but the doctor’s role (and possibly the patient’s underlying condition) complicates causation. The law would ask whether the harm would have occurred “but for” the defendant’s negligence, and whether any intervening act broke the chain of causation. In our example, if a diligent doctor should have caught the AI’s error, a court might view the doctor’s failure as an intervening cause, potentially absolving the developer.

Alternatively, if the AI’s recommendation was so authoritative or the error so hidden that even a careful doctor would not realise, the developer’s fault remains a direct cause. As this suggests, AI-related harm often involves multiple actors (developers, deployers, users), so courts might apportion liability or find one party primarily responsible depending on their contributions to the outcome. It is also necessary that the kind of damage was reasonably foreseeable. With AI’s unpredictability, defendants might argue that the exact sequence of events was not foreseeable. However, the law only requires general foreseeability of harm. For instance, it is foreseeable that a delivery drone might crash and injure someone if not properly designed or operated, even if the precise software bug was unforeseeable. Thus, developers cannot escape liability simply because the AI’s manner of failing was novel, if the risk of injury from a malfunction was foreseeable.

Defences in Negligence: A defendant might reduce or avoid liability by establishing a defence such as contributory negligence or voluntary assumption of risk (volenti non fit injuria). In AI contexts, a manufacturer could claim the user misused the AI or ignored safety instructions, contributing to the harm. For example, if a car’s manual required the driver to remain alert and keep hands on the wheel, but the driver was reading a book when the AI misjudged a situation, a court might reduce the driver’s damages for contributory negligence. Similarly, if someone knowingly lets an AI perform a task in a clearly unsafe manner (contrary to provided warnings), a defendant could argue the victim voluntarily assumed the risk. These principles apply normally, though the novelty of AI might lead to factual disputes about what risks were known or obvious.

In summary, negligence law provides a mechanism to hold AI developers and users responsible when their lack of care leads to injury. An AI developer can be liable if they failed to act with the competence and foresight expected in designing the AI, and an operator can be liable for mishandling or mis-monitoring an AI. The flexibility of negligence is that it can, in theory, adapt to many scenarios. However, its weakness in the AI context is the difficulty for claimants to prove a specific fault in complex technology. As noted in one commentary, requiring the claimant to pinpoint how an AI failed and what the defendant did wrong can be an “onerous burden” when the internal workings of the algorithm are opaque. This evidentiary challenge is where product liability may play a crucial role.

Strict Product Liability Under the Consumer Protection Act 1987

When an AI system or robot causes harm due to some defect, the injured party’s most powerful tool may be the Consumer Protection Act 1987. Part I of the CPA 1987, which implements the EU Product Liability Directive 85/374/EEC in UK law, imposes strict liability on producers for damage caused by a defective product. “Strict liability” means the claimant does not need to prove negligence or fault – only that the product was defective and caused the damage. This regime is well suited to situations where something goes wrong with a complex product, because the burden to prove specific wrongdoing by the manufacturer is removed. Instead, the focus is on the product’s safety.

Under the CPA 1987, a “product” is “any goods or electricity” and includes products comprised within another product (e.g. a component) (CPA 1987, s.1(2)). The Act has traditionally been applied to tangible consumer goods – for example, medicines, vehicles, or appliances – but it is unsettled how it applies to software. If an AI system is embodied in hardware (say, a domestic robot or an autonomous car), that hardware is a product and its software likely counts as part of the product. However, if AI is delivered purely as software or a digital service (for instance, a machine-learning algorithm provided via the cloud), it becomes debatable whether the CPA applies. The Act does not explicitly mention software, and courts have not yet decided whether stand-alone software is a “product” for these purposes. Legal commentators note that this ambiguity could lead to arbitrary outcomes. For example, the same AI algorithm might be subject to strict liability if sold embedded in a device, but not if provided as an online service. The Law Commission in 2021 identified this as an area ripe for reform – suggesting that the CPA might be extended to cover software and AI-driven technology – but that review has been put on hold. Thus, for now, whether a given AI system falls under the CPA may depend on its form: AI integrated in physical products (like vehicles or robots) is likely covered, whereas pure software AI tools might be treated as services, leaving claimants to pursue negligence unless future case law or legislation clarifies the issue.

A product is “defective” under the CPA if its safety is “not such as persons generally are entitled to expect” (CPA 1987, s.3). This consumer-expectation test looks at what the public is entitled to expect in terms of safety, considering all the circumstances, including how the product was marketed, any instructions or warnings, and what might reasonably be expected to be done with the product. Applied to AI, this means a court would ask: did the AI system meet the level of safety that people generally are entitled to expect from such a product? If an autonomous home assistant robot suddenly causes a fire or a cleaning robot aggressively knocks someone down the stairs, one would argue that consumers did not expect that outcome – indicating a defect in safety. Importantly, under this test the focus is on the product’s performance, not the manufacturer’s conduct. So even if the manufacturer took reasonable care (no negligence), the product can still be found defective if it fails the safety expectation standard.

However, the expectation of safety can be calibrated by how the product is presented. For instance, if a semi-autonomous car is sold with clear warnings that it is not fully self-driving and requires driver supervision at all times, a court might find that the public’s expected safety includes the need for human intervention. In a case where a driver ignored those warnings and an accident occurred, the product (the car and its AI) might not be deemed defective given the warning – the failure lay in misuse by the user. On the other hand, if an “autonomous” vehicle is advertised as capable of safely driving itself, the public is entitled to expect it will do so at least as well as a competent human driver. If it then causes an accident under conditions it should handle, that expectation is breached. Indeed, the UK’s new Automated Vehicles Act 2024 sets a “safety ambition” that authorised self-driving cars must be as safe as or safer than human drivers. Should an autonomous vehicle fall short of that and injure someone, it could well be deemed defective under the CPA standard.

There have been no reported English cases yet on AI product defects, but analogies can be drawn from existing product liability case law. In A v National Blood Authority [2001] 3 All ER 289, a case under the CPA, blood supplies were held defective because they were infected with the hepatitis C virus – consumers expected clean blood, even though the producers argued the risk was unavoidable at the time. The court prioritised the consumer expectation of safety over the fact that the producers were not negligent. By analogy, if an AI has an unknown flaw that can cause harm, it might still be deemed defective from the consumer’s perspective, even if the state of scientific knowledge did not allow the producer to foresee or prevent it. (Notably, producers have a specific defence for development risks, discussed below.)

In Wilkes v DePuy International Ltd [2016] EWHC 3096 (QB) and Gee v DePuy International Ltd [2018] EWHC 1208 (QB), concerning allegedly defective hip implants, the courts took into account the benefits and risks of the product when assessing defectiveness. They indicated that the level of safety the public is entitled to expect can involve a risk-benefit balance. Applied to AI, if an autonomous system offers substantial benefits but carries a small risk of harm, a court might find that some level of risk was accepted by society and not label the product defective merely because that risk materialised in one case. For example, if a self-driving car statistically reduces accidents overall but has a particular quirk that in one-in-a-million situations causes an accident, a court might be reluctant to call it “defective” if that risk is considered tolerable in exchange for the benefit – especially if the risk was disclosed. On the other hand, if the harm arises from a hazard that could have been designed out with minimal downside, then failing to do so might render the product defective.

Under the CPA, the persons who can be held liable are the “producer” (typically the manufacturer), anyone who holds themselves out as a producer (own-branders), importers of the product into the UK, and suppliers if they fail to identify who made or imported the product (CPA 1987, s.2). This generally means the AI’s manufacturer (or the company behind the AI software, if that software is deemed part of the product) will be the target of a CPA claim. One issue is that some actors like software developers or online platforms might not fit neatly into these categories. For instance, a software company providing AI updates to a car might or might not be a “producer” depending on how the update is delivered. If the AI is a service, the CPA does not cover service providers at all. As noted, this leaves a potential gap for digital AI solutions deployed via cloud or algorithms offered on a subscription basis.

A key feature of CPA liability is that it is subject to certain defences (CPA 1987, s.4). Of particular relevance to AI is the “development risks” defence (also known as the state-of-the-art defence). A producer is not liable if they prove that the defect was one that no one could discover given the scientific and technical knowledge at the time the product was put into circulation. This defence could be invoked by AI producers in cases where an AI’s harmful failure was truly unpredictable with current knowledge. For example, if an AI-driven robot behaved dangerously due to an emergent property of its learning algorithm that experts had never observed or theorised before, the manufacturer might argue that the risk was undiscoverable and thus claim the development risks defence.

This defence, intended to protect innovation from liability for unknown risks, was a contentious provision when the EU Directive was enacted. Claimants and commentators have criticised it as potentially undermining consumer protection. The EU’s recent proposal to reform product liability law in the digital age even considers removing or limiting this defence for certain high-tech products, reflecting concern that it could exempt AI producers too easily. Under current English law, however, the defence remains available. If successfully raised, a claimant might then have to fall back on a negligence claim – which, as noted, would also be difficult if the risk was unforeseeable.

Another CPA limitation is the ten-year long-stop: a claim is barred if brought more than ten years from the date the product was first put into circulation (Limitation Act 1980, s.11A, inserted by CPA 1987, Sch 1). This could be significant for AI, because software-based systems might still be in use or receiving updates more than a decade after release. If an old AI product causes harm after the ten-year window, the injured party can no longer use the CPA’s strict liability route and would have to prove negligence instead. Given that AI software might evolve or be maintained over time, identifying when the “product” was put into circulation could itself raise questions (is it the sale of the original software or the date of the latest update that introduced the defect?). These nuances remain to be tested.

Despite these complexities, the CPA 1987 is seen as a crucial avenue for dealing with AI accidents. It “overcomes considerable difficulties presented by AI-driven technologies for negligence law” by alleviating the need for a claimant to evidence fault. In other words, if an AI-powered device causes injury, the claimant can allege the device was defective and thereby shift focus onto the product’s safety rather than having to unravel the AI’s decision process or the developer’s conduct. This also bypasses any privity of contract issues, allowing a bystander injured by an AI to sue the manufacturer directly even though they had no contract with them.

In summary, under the CPA 1987, AI manufacturers and others in the supply chain can be held liable if an AI product is defective and causes harm, regardless of whether they were negligent. This is subject to defences like development risks and to the fundamental question of whether the AI system is considered a “product” under the Act. The strict liability approach aligns with the idea that those who introduce potentially risky technology into society should bear the cost of harm caused by defects in that technology, rather than making the injured party prove fault. English law currently relies on this regime (alongside negligence) to address AI-driven harm, while debates continue about updating the definition of “product” and other aspects to better fit AI and digital tech.

Automated Vehicles: A Case Study in AI Liability and Legislation

One of the most prominent areas where AI-driven systems pose injury risks is autonomous vehicles. Self-driving cars use AI to perform the driving task, raising the classic question: if an autonomous car causes an accident, who is liable – the “driver” (who might have been hands-off), the car manufacturer, the software developer, or someone else? Anticipating these issues, the UK has been proactive in legislating for autonomous vehicle (AV) liability. This provides a useful case study of how traditional liability is being supplemented in the AI age.

The Automated and Electric Vehicles Act 2018 (AEVA 2018) was an early piece of legislation addressing autonomous cars. Under AEVA 2018, when a vehicle that is listed as “automated” by the Secretary of State is driving itself on a road and an accident occurs, the injured party can bring a claim against the vehicle’s insurer for the damage. In effect, the insurer steps into the shoes of the “driver” for an autonomous vehicle incident, even if the human user was not in control. The insurer can then, in turn, recover any payouts from the responsible party, which could be the vehicle manufacturer or another at-fault entity, through existing laws (either negligence or product liability). This scheme ensures that victims are compensated without having to untangle the complex question of fault at the outset – the insurance covers it, much like compulsory motor insurance covers accidents caused by human drivers. Notably, if the human in the vehicle was partly responsible (e.g. by failing to take back control when prompted by the vehicle, or by making unauthorised alterations to its software), the compensation can be reduced or denied, similarly to contributory negligence (AEVA 2018, ss.3 and 4).

Building on this, the UK government undertook a comprehensive review through the Law Commission of England and Wales and the Scottish Law Commission, culminating in recommendations for a new regulatory framework for automated vehicles in January 2022. The result was the Automated Vehicles Act 2024 (AVA 2024), introduced as the Automated Vehicles Bill in late 2023 and granted Royal Assent in 2024.

The Automated Vehicles Act 2024 creates a legal framework for the safe deployment of self-driving vehicles in Great Britain. One of its significant aspects is clarifying liability and the role of humans when a car is driving itself. The Act introduces the concept of an “authorised automated vehicle” (a vehicle that has passed a safety test and is approved for self-driving) and distinguishes between vehicles with a “user-in-charge” and those without. For authorised vehicles that still require a human user to be ready to take over in some situations (user-in-charge vehicles), the human is not to be held liable for incidents that occur while the car is driving itself. In fact, the AVA 2024 shifts criminal liability for driving offences away from the occupant to those responsible for the vehicle’s self-driving system when the automated mode is engaged. In other words, if a self-driving car runs a red light or speeds while in autonomous mode, the human passenger would not be prosecuted; instead, the company responsible for the automated driving system (the licensed operator or “Authorised Self-Driving Entity”) would face regulatory consequences.

On the civil side, AVA 2024 builds on the insurance-based approach of AEVA. Victims of crashes involving authorised self-driving cars will continue to be compensated by the insurer. The difference now is that the law explicitly acknowledges that the human user might be immune from blame if the car was truly self-driving.

The focus then shifts to the vehicle’s manufacturer or the entity that obtained the authorisation (often the developer of the self-driving system). If the cause of the accident was a defect in the car or its software, a product liability claim under the CPA 1987 could be brought by the insurer (via subrogation) or directly by victims. If it was due to some negligence in the upkeep or deployment of the system by the operator, negligence claims could arise. The AVA 2024, however, aims to avoid needing to pin fault on the user or the technology company in every instance by making insurance primary – thereby ensuring a quick route to compensation.

A practical example: imagine an authorised automated vehicle in self-driving mode fails to detect a cyclist and causes an accident. The cyclist can sue the car’s insurer under AEVA/AVA and be compensated. The insurer might then investigate the cause. If it turns out the vehicle’s sensor software was defectively designed (an issue attributable to the manufacturer), the insurer could pursue the manufacturer under product liability to recover the payout. If, alternatively, the cause was that the human user had ignored the manufacturer’s instruction to service the sensors or had overridden the system incorrectly, the insurer might argue the user was partially at fault (reducing the claim or seeking contribution). Under AVA 2024, if the vehicle was truly autonomous at the time, the user’s obligations would be minimal, placing most responsibility on the manufacturer and the vehicle’s automated system overseers.

The introduction of these AV-specific laws highlights a tailored approach: for motor vehicles, a combination of compulsory insurance and clear allocation of legal responsibility (manufacturers and system operators taking on more liability, users less) is used to manage AI harm. It reflects the high stakes of road accidents and the existing framework of motor insurance. Notably, AVA 2024 also introduces new criminal offences, such as making it illegal to market a vehicle as self-driving if it was not truly an authorised AV, and establishes investigative mechanisms for AV incidents on a “no-blame” basis to improve safety. These measures reinforce that with AI, responsibility also includes proactive regulation and oversight, not just after-the-fact liability.

It is worth comparing this to other AI contexts which lack specific statutes. For example, if an AI-powered medical device malfunctions and injures a patient, there is no special “AI injury act” for healthcare – one must rely on general product liability or negligence. The autonomous vehicles regime could serve as a model for other areas if needed. Some have proposed, for instance, mandatory insurance or compensation funds for damages caused by certain AI (such as drones or industrial robots), analogous to the motor insurance scheme. As of now, however, the only dedicated AI liability legislation in the UK is for vehicles. In Scotland, the same Acts (AEVA 2018 and, as its provisions come into force, AVA 2024) apply; road traffic law is devolved in Northern Ireland, so corresponding legislation would be needed there.

Other Proposed Reforms and Comparative Perspectives

While English law currently addresses AI harm through the above mechanisms, there are ongoing discussions about law reform to better accommodate AI. In its 2021 consultation on the 14th Programme of Law Reform, the Law Commission earmarked “product liability and emerging technology” as a potential project, noting that the CPA 1987 was “not designed to accommodate software and related technological developments such as 3D printing or machines that ‘learn’”. The subsequent pause of this project means no immediate changes, but it acknowledges a perceived gap. The Commission recognised that questions such as whether AI software is a product, or how to balance innovation and consumer safety in AI, may need bespoke consideration in the future.

At the European level, although the UK is no longer bound by EU law, developments are influential. The European Commission in late 2022 proposed updates to the Product Liability Directive and a new AI Liability Directive to address the digital age. The proposals include clarifying that software (including AI systems) can be products, easing the burden of proof for claimants in complex cases (for instance, allowing courts to presume a product was defective or a developer at fault if an AI caused harm and the claimant cannot access the evidence needed to prove exactly why), and possibly introducing rules to ensure manufacturers disclose information about high-risk AI systems. The EU draft also suggests removing the state-of-the-art defence for certain products, meaning AI producers could be liable for unknown risks in some cases. While these changes do not directly affect UK law, UK businesses that sell into Europe might end up adhering to them, and the UK government may take note of the direction of travel when considering its own stance.

The UK Government’s approach, as evidenced by the AI Regulation White Paper 2023, has been to favour a light-touch, principles-based regulatory framework for AI, relying on sectoral regulators to manage AI risks rather than broad new laws. This means in the short term we may not see a sweeping “AI liability act” in the UK. Instead, adaptation will likely occur incrementally: through cases in which courts interpret existing tort law in AI contexts, and through targeted legislation once specific needs become clear. One risk of the case-by-case approach is that uncertainties (like the software-as-product issue) persist until litigated or reformed.

In the meantime, scholars and policy analysts have floated various solutions to ensure AI accountability. One idea that gained attention in the past was to grant sophisticated AI systems a form of legal personhood (“electronic persons”) so they could bear liability themselves. The European Parliament mentioned this in a 2017 resolution, but it was met with criticism and has been largely set aside in favour of assigning responsibility to human actors (developers, deployers, etc.). Other proposals include establishing compulsory insurance for operators of AI (similar to car insurance, but for, say, operators of delivery drones or robots) and no-fault compensation funds for victims of AI accidents. The rationale is that if pinpointing fault is too difficult, society might treat AI harm like injuries in industrial accidents or vaccines – handle compensation without fault and spread the cost through insurance or levies on the industry. For example, New Zealand’s accident compensation scheme covers injuries without needing to prove fault (though not specific to AI). Such approaches, however, would mark a significant shift in the tort system and have not been adopted in the UK so far.

Another significant conversation is how to prevent what some have called “liability gaps” or the scenario of users being unfairly burdened. A recent study warned that clinicians risk becoming “liability sinks” – absorbing all legal responsibility for AI-influenced decisions, even when the AI system itself may be flawed. In the healthcare context, this could make doctors reluctant to use AI decision-support tools, fearing they will be blamed for errors the AI makes.

To address this, commentators have suggested adjusting product liability to ensure AI developers remain accountable for flawed recommendations. For instance, a 2024 white paper by UK researchers recommended reforming product liability for AI tools before allowing them to make independent recommendations in healthcare. If an AI is effectively making a decision that affects a patient, perhaps the AI should be treated as a product with strict liability for its maker, rather than expecting the doctor to shoulder the risk. Similarly, in an autonomous car, laws like AVA 2024 explicitly aim to ensure the passenger is not liable when the AI is in control, reinforcing that liability should lie with those who created or control the technology.

Finally, it is important to note that traditional tort concepts such as vicarious liability will also find application in AI scenarios. If an employee causes harm through their use of an AI tool, the employer can be vicariously liable as with any other employee act. This covers situations such as a logistics company whose employee negligently operates an AI-guided forklift and injures someone – the company is liable for the employee’s negligence. Where the “actor” is the AI itself, however, current law does not permit vicarious liability for a machine; instead, a court would likely find a direct duty on the company deploying the AI (for instance, negligence in training or supervising the AI’s operation). So far, courts have not needed to stretch the vicarious liability doctrine to cover AI because they can attribute fault to a human somewhere in the chain. But as AI becomes more autonomous, this underscores the broader need to ensure there are always identifiable human or corporate defendants for those harmed.

Real-World Examples of AI-Driven Harm

To ground the discussion, it is helpful to consider real incidents where AI systems have caused harm, and how the law would likely assign responsibility under the above principles:

  • Autonomous Vehicle Fatality (Uber, 2018): The first recorded pedestrian death involving a self-driving car occurred in Tempe, Arizona in 2018, when an Uber test vehicle in autonomous mode struck a woman crossing the road at night. Investigations revealed that the car’s AI software failed to properly identify the pedestrian and did not brake, and that the human safety driver was not paying attention. In a UK scenario, if this were an “authorised” self-driving car under AVA 2024, the victim’s family could claim against the insurer of the vehicle. The insurer could then seek recovery from those at fault. Uber (as the developer of the self-driving system) might be held liable under product liability if the system’s failure was due to a defect in design. Evidence that the AI was inadequately programmed to handle jaywalking pedestrians would support a defect claim or a negligence claim for faulty design or testing. The safety driver’s negligence in failing to intervene was also an immediate cause. Liability would therefore likely be shared, with the operator’s negligence and the product’s defect both contributing. Uber as a company could also be vicariously liable for the safety driver’s fault (since she was an Uber employee during testing). In the actual US case, Uber avoided criminal charges; only the safety driver was prosecuted. Civilly, Uber quickly settled with the victim’s family. This example shows multiple layers of responsibility – the human overseer, the developer of the AI, and the company deploying the AI – each potentially liable. English law would use a combination of negligence (for the driver’s inattention and any corporate negligence in training or supervision) and product liability (for the software flaw) to apportion blame. Importantly, future fully autonomous vehicles aim to remove the need for a human overseer; at that point, liability would fall squarely on the manufacturer if the AI fails, which is why the law is shifting to anticipate this by making the manufacturer or “authorised self-driving entity” the focal point.
  • Tesla Autopilot Crashes: Tesla’s “Autopilot” driver-assistance system, while not full self-driving, has been involved in several high-profile crashes (some fatal). Typically, these incidents involve the system failing to detect an obstacle (like a crossing truck or a highway barrier) and the human driver not taking over in time. Tesla warns users that Autopilot is not autonomous and that they must remain attentive. If such a case came to an English court, responsibility would hinge on the specifics. The driver could be found negligent if they relied on Autopilot beyond its intended use (for example, a driver who was watching a film when they should have been supervising the car). Tesla could face product liability claims if it is shown that the Autopilot system had a safety defect – for instance, an inability to recognise certain hazards that falls below what drivers are entitled to expect of a car touted as having advanced collision avoidance. One might argue that if a reasonable consumer is told the car can “drive itself” in certain scenarios, they expect it not to plough into obvious obstacles. Tesla would likely defend itself by pointing to its explicit warnings and by arguing that no full self-driving capability was advertised (especially without regulatory approval). As of now, most legal actions arising from such crashes (mainly in the US) have focused on driver error, but there are ongoing product liability lawsuits alleging design defects in the system. In an English context, a court might well find shared liability: the driver for not paying attention (breaching their duty as a user) and the manufacturer if the design did not meet expected safety standards. The outcome would depend on technical evidence about the system’s capabilities and whether the accident was due to a random failure or an inherent limitation that should have been remedied.
  • AI Diagnostic Error: Imagine a hospital uses an AI diagnostic system for interpreting patient scans. A glitch causes it to miss signs of cancer, and a patient’s diagnosis is delayed, resulting in harm. Who is liable? The patient could sue the hospital (and the responsible doctor) for clinical negligence – arguing that a competent clinician would not rely solely on the AI or would have caught the error. The hospital might be held liable if its staff failed to double-check the AI’s output or if adopting the AI without proper validation was itself a breach of duty. The hospital or doctor might then seek contribution from the AI’s manufacturer. If the AI system is considered a medical device (as many diagnostic AI tools are, under medical device regulations), the manufacturer would owe a duty of care and could also be strictly liable under the CPA if the software was defective. Was the misdiagnosis due to a defect (like a training bias making the AI systematically under-detect certain cancers)? If yes, the manufacturer as producer of the AI device could be liable for the injury (the worsened medical condition). However, proving a software defect is tricky without access to the algorithm’s inner workings. The manufacturer might defend on the basis that no one could have discovered the issue (invoking the development risks defence) or that the product documentation advised using the AI as an aid, not a sole decision-maker (shifting blame to the hospital for over-reliance). The concern, as noted earlier, is that the end-user clinician becomes the “liability sink” – the easiest target for legal action – while the AI provider might escape liability due to contract disclaimers or the burden of proving a defect. Addressing this likely requires clearer routes to hold AI providers accountable (e.g. treating such software as products and possibly adjusting how courts handle proof of defect or causation in these cases). In the UK, regulators and professional bodies may also issue guidance: for instance, doctors are advised that using AI does not absolve them of responsibility for the care decision, which in practice means liability may currently rest with the clinician unless product liability can be established against the AI manufacturer.
  • Industrial Robot Accident: Consider an AI-powered robotic arm in a factory that unexpectedly swings and injures a maintenance technician. If the robot malfunctioned due to a defect in its control software, the manufacturer of the robot would be a prime candidate for strict liability under CPA 1987. The technician (or the employer’s liability insurer by way of subrogation, if it has already compensated the technician) could claim the robot was defective since it behaved in an unsafe way. The manufacturer might counter that the robot was tampered with or not maintained as instructed, potentially shifting fault to the factory. If the factory had disabled a safety feature or failed to train the technician about a known risk, the factory could indeed be found negligent. In one real incident from Germany in 2015, a factory robot killed a worker; the accident was attributed to human error in installation rather than a spontaneous AI decision, but it underscored how both design and usage are factors. Under English law, an injured employee could sue both the manufacturer (product liability) and the employer (negligence under the employer’s duties to provide safe equipment). The employer has a non-delegable duty to ensure machinery is safe, so even if the robot was inherently unsafe, the employer would be liable to the worker and could then seek recourse from the manufacturer. This scenario again shows multiple layers of accountability – reflecting that with AI, as with any complex system, responsibility may be shared among the creator, the user, and others, depending on who contributed to the harm.

Conclusion

The advent of AI does not find the law of tort completely unprepared – core principles of negligence and product liability in English law are capable of addressing many scenarios of AI-driven harm. Developers and manufacturers of AI systems bear responsibility under well-established rules: they must foresee and prevent risks where possible, and if their product is defective and unsafe, they can be held strictly liable for resulting damage. Users and operators of AI are expected to act responsibly as well, though the more autonomous the AI, the less the law will attribute fault to a human user who had no effective control. The Consumer Protection Act 1987 provides a powerful tool to hold AI producers accountable without proving fault, bridging some evidentiary difficulties inherent in complex technology. Negligence law complements this by covering situations the CPA might not (such as AI provided as a service, or harms occurring outside the CPA’s scope or time limits), ensuring that if someone in the chain was careless, they can be held to account.

However, AI does strain the traditional frameworks. Challenges include: identifying when software is a “product” in law; the difficulty for claimants of proving a defect or a developer’s negligence inside an opaque machine-learning system; and the possibility that an AI’s harm was not truly anyone’s fault in the conventional sense. Real-world incidents like autonomous vehicle crashes and AI-related medical errors reveal a risk of gaps or unfair outcomes – for example, an innocent user might be blamed for an AI’s mistake, or conversely a victim might go uncompensated because they cannot prove any human was negligent. English law is beginning to adapt, as seen with the new Automated Vehicles Act 2024, which reallocates liabilities in the context of self-driving cars to ensure victims are protected and responsibility rests with those in control of the technology.

Looking forward, we may see further evolution. It could come through judicial decisions – courts incrementally extending principles to new situations – or through targeted legislation once specific needs become apparent. Policymakers will need to balance innovation and accountability, echoing the CPA’s original aim of balancing consumer protection with safeguarding innovation. This balance was cited in recent case law (e.g. Wilkes and Gee) to avoid an over-expansive view of “defect” that might unduly hinder beneficial products. The same caution will likely guide AI liability: the law must provide remedies for those harmed by AI while not imposing impossible burdens on those developing it.

In conclusion, the current combination of negligence and product liability law in England, supplemented by the first generation of AI-specific statutes like AVA 2024, provides a workable, if imperfect, framework for AI-driven harm. Developers and manufacturers are principally on the hook if their AI causes injury – either for negligence in its creation or under strict liability for defects – and users may be secondarily liable if their own actions contribute. There remain grey areas and the potential for hard cases that test the limits of existing doctrines. As AI continues to advance, the legal system will no doubt continue to refine who bears the cost of AI’s risks. The goal will be to ensure that those who benefit from and control AI technology also bear appropriate responsibility for any harm it causes, thus aligning with the enduring tort law policy of incentivising safety and providing recourse to the injured.

References

  • Automated and Electric Vehicles Act 2018, c. 18.
  • Automated Vehicles Act 2024, c. 10.
  • Consumer Protection Act 1987, c. 43.
  • Donoghue v Stevenson [1932] AC 562 (HL).
  • Caparo Industries plc v Dickman [1990] 2 AC 605 (HL).
  • A v National Blood Authority [2001] 3 All ER 289.
  • Wilkes v DePuy International Ltd [2016] EWHC 3096 (QB).
  • Gee v DePuy International Ltd [2018] EWHC 1208 (QB).
  • Law Commission (2022) Automated Vehicles: Joint Report, Law Com No. 404.
  • Department for Transport (2023) Automated Vehicles Bill: Policy Paper, UK Gov (21 Nov 2023).
  • Grolman, L. (2019) ‘Autonomous Vehicles, Software and Product Liability: Have the Law Commissions Missed an Opportunity?’, Oxford Business Law Blog, 22 Oct 2019.
  • Tettenborn, A. (2023) ‘Artificial intelligence and civil liability – do we need a new regime?’, International Journal of Law and Information Technology, 30(4), pp.385–405.
  • Kennedys Law LLP (2023) ‘The Consumer Protection Act 1987: staying put… for now’, Kennedys Insights, Oct 2023.
  • Lawton, T., Morgan, P., & Porter, Z. (2024) ‘Clinicians risk becoming “liability sinks” for artificial intelligence’, Future Healthcare Journal, 11(1): 100007.
  • University of York (2025) ‘Perceived “burden” of AI greatest threat to uptake in healthcare, study shows’, press release, 10 Jan 2025.

Article by LawTeacher.com