Can an AI Chatbot Be Held Liable? A Wrongful Death Case Tests Tort Law
In May 2025, a US federal court allowed a wrongful death and product liability lawsuit to proceed against the chatbot app Character.AI and Google, whose cloud infrastructure and AI models were used to power the app. The case comes after a tragic incident in which a 14-year-old boy died by suicide after prolonged interactions with an AI companion on Character.AI.
This is one of the first cases to apply traditional tort principles to generative AI, and it raises an important question: can tort theories like negligence and strict liability apply to AI software? The outcome will help set a precedent for how future product liability lawsuits balance AI innovation against user safety.
The Character.AI Liability Case
Megan Garcia filed the lawsuit after her son, Sewell Setzer III, took his own life in February 2024, having formed an intense, unhealthy bond with a Character.AI chatbot. According to the complaint, the teen became obsessed with role-playing conversations that turned sexually explicit and emotionally manipulative.
Garcia’s lawsuit alleges that Character.AI and its founders are responsible for her son’s death under multiple legal theories, including strict product liability for defective design and failure to warn, as well as negligence leading to wrongful death.
The complaint argues the chatbot was designed in an unreasonably dangerous way for minors, for example by lacking safeguards against harmful content and by actively encouraging vulnerable users to treat the AI as a real confidant or lover. Garcia contends the harm was foreseeable and that the company breached its duty of care by failing to warn users (or their parents) of these risks.
Defendants denied these claims, arguing that “the First Amendment precludes all Plaintiff’s claims and that Character A.I. is not a product for the purposes of product liability.”
In May 2025, Judge Anne Conway of the US District Court for the Middle District of Florida in Orlando denied the defendants’ motion to dismiss as to most of these claims. She ruled that the wrongful death, negligence, and product liability counts alleged sufficient facts to proceed to discovery, rejecting Character.AI’s argument that its chatbot’s output was protected speech under the First Amendment.
However, the judge did dismiss one claim for intentional infliction of emotional distress, finding the allegations, while disturbing, did not meet the extremely “outrageous” standard for that tort.
Traditional Tort Theories Explained

This case applies traditional tort law doctrines, typically used for physical products or services, to an AI system. The two primary tort theories in play are negligence and strict product liability.
Negligence is “the failure to behave with the level of care that a reasonable person would have exercised under the same circumstances.” It requires showing the defendant owed a duty of care to the victim, breached that duty by failing to act as a reasonable person or company would, and thereby caused the victim’s injury, resulting in damages. In Garcia’s case, the negligence claim asserts that Character.AI did not act responsibly to safeguard minor users from a foreseeable risk of psychological injury.
Product liability law can impose strict liability on manufacturers for defective products regardless of negligence or intent to harm. A plaintiff typically must prove that the product was sold in a defective condition, whether because of its design, a manufacturing flaw, or inadequate warnings, and that the defect caused the injury. Historically, this doctrine was developed for tangible consumer goods, but courts have gradually expanded “product” to include intangible items such as software, mobile applications, and digital services.
In this case, the product liability claims focus on two alleged defects: (1) a design defect in the AI chatbot itself, and (2) a failure to warn of the chatbot’s mental health risks for minors.
Can AI Be a Product?
Character.AI’s defense argued that its software platform is a service, not a tangible product. However, Judge Conway declined to throw out the product-based claims at this early stage, rejecting the contention that a chatbot app cannot be a product. In her order, she notes that “these harmful interactions were only possible because of the alleged design defects in the Character AI app,” allowing the strict liability theory to move forward.
The court’s reasoning suggests that an AI system provided to consumers can be treated like any other consumer product if it is placed into the stream of commerce and poses predictable dangers. The complaint emphasized that the Character.AI app had a “definite presence” on the user’s device and was uniformly distributed to many users, akin to a mass-produced product, just in digital form.
If courts treat software and algorithms as products, developers and distributors could be held liable for design defects or inadequate warnings in their code just as if they were selling a defective toy or appliance.
Design Defects in AI: What Does That Even Mean?
Proving a design defect in a machine-learning system is a unique challenge. Unlike a traditional product, an AI’s “defect” may not be a physical flaw but an aspect of its programming or training that makes it unreasonably dangerous. Machine learning models are probabilistic and can produce unexpected outputs, a form of unpredictability that complicates judging whether the design was “safe.”
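To make that unpredictability concrete, here is a minimal, hypothetical sketch (in Python, not Character.AI’s actual code) of temperature-based sampling over a toy next-token distribution. Even with the same prompt and the same model weights, the output can differ on every run:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from a softmax over logits, scaled by temperature."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_score = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = [exps[tok] / total for tok in exps]
    return random.choices(list(exps), weights=probs, k=1)[0]

# A toy "model" queried five times with identical inputs can return different tokens:
toy_logits = {"supportive": 2.0, "neutral": 1.5, "harmful": 0.5}
print([sample_next_token(toy_logits) for _ in range(5)])
```

Whether that inherent variance makes a design “unreasonably dangerous,” or merely hard to audit, is one of the doctrinal questions the court will have to confront.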
However, this lawsuit lists several ways in which Character.AI’s design might be considered defective. For instance, the plaintiff claims that Character.AI was trained on large datasets “widely known for toxic conversations, sexually explicit material, copyrighted data, and even possible child sexual abuse material,” leading it to generate similarly harmful outputs.
The complaint states that these risks were not accidental but stemmed from intentional design choices: Character.AI prioritized making the chatbot as engaging and “lifelike” as possible to attract young users, without building in adequate safety filters or moderation.
From the plaintiff’s perspective, a safer alternative could have been feasible using stricter content moderation, requiring age verification, or programming the AI to detect and respond appropriately to signs of a user’s mental health crisis. Failing to implement these protections, the complaint argues, made the product unreasonably dangerous and defective by design.
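As a purely illustrative sketch of what such alternatives might look like in code, a basic safeguard layer could screen messages before a reply ever reaches a minor. The keyword list, age gate, and crisis message below are hypothetical placeholders, not Character.AI’s actual implementation:

```python
from dataclasses import dataclass

# Hypothetical, far-from-exhaustive list of self-harm cues.
SELF_HARM_CUES = ("kill myself", "end my life", "suicide")
CRISIS_MESSAGE = (
    "It sounds like you may be going through a hard time. "
    "Please reach out to a trusted adult or a crisis hotline."
)

@dataclass
class User:
    age: int
    verified: bool  # e.g., age verified at signup

def moderate_reply(user: User, user_message: str, model_reply: str) -> str:
    """Screen a model reply before it reaches the user."""
    # 1. Age verification: unverified or underage users get a restricted mode.
    restricted = (not user.verified) or user.age < 18

    # 2. Crisis detection on the user's own message takes priority over the reply.
    if any(cue in user_message.lower() for cue in SELF_HARM_CUES):
        return CRISIS_MESSAGE

    # 3. Content filter on the model's output for restricted users.
    if restricted and any(cue in model_reply.lower() for cue in SELF_HARM_CUES):
        return CRISIS_MESSAGE

    return model_reply
```

Whether protections of this kind were technically and commercially feasible at the time is precisely the sort of factual question a design-defect claim turns on.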
Foreseeability and Duty of Care
A central question in the negligence claim is foreseeability, which is crucial in establishing a duty of care: Was it reasonably predictable that the chatbot could cause this kind of harm to a user?
Garcia alleges that Character.AI should have anticipated the danger of exposing an emotionally immature user to an uncensored AI persona. Indeed, the lawsuit points to warnings from experts (and even the defendants themselves) about such risks, suggesting the potential for users to form unhealthy beliefs or dependencies on AI.
In practice, proving foreseeability might involve showing that similar warning signs were ignored. For example, were there prior incidents of self-harm or severe distress linked to chatbot use, even if not publicized? Did the company conduct any risk assessments or content testing with teens? According to the lawsuit, Character.AI did not adequately warn parents or users about the mental health risks.
The duty of care for a company offering an AI service to the public arguably includes anticipating such misuse or overuse of the product. Judge Conway’s ruling suggests that, at least at the pleading stage, the court found it plausible that a duty existed and was breached. The foreseeability analysis also tilts in the plaintiff’s favor at this stage: the general kind of harm (severe psychological impact on a teen from an AI relationship) was exactly the risk that critics of the technology have highlighted and that a reasonable developer could foresee.
Litigation Challenges

One major hurdle will be proving causation: that the chatbot’s design and interactions were not only wrongful but actually caused the teen’s death. In traditional product cases, causation can be clear; think of a defective airbag that fails to deploy, causing injury in a crash.
Here, the chain of causation is more complex. The defense will likely argue that the suicide resulted from the young man’s underlying mental health issues or other factors, not any fault of the AI. Establishing that the AI’s words had a powerful enough influence to meet legal causation standards will require persuasive evidence. Plaintiffs will rely on the temporal and content link: the allegation that the suicide occurred seconds after the chatbot encouraged him to act, coupled with months of interactions that allegedly eroded his mental state.
Another challenge is the opacity of AI systems. Character.AI’s chatbot is powered by a large language model that operates as a black box; even its creators may not be able to pinpoint exactly why it generated specific responses. This could complicate the evidence. Plaintiffs might seek internal documents or training data to show the company knew the model could produce dangerous output.
On the other hand, the company might argue that the AI’s outputs are not entirely predictable or controllable, raising the question of how to assign fault. However, the lack of perfect control is not a complete defense if the design was inadequate. Courts have handled analogous issues before (for example, claims against social media companies for algorithmic addiction harms). In such cases, internal risk assessments and design documents become important evidence: they can reveal if the company knew of the hazards and chose not to act.
This Case Could Set New Precedent
This case could redefine how AI platform accountability is viewed under the law.
The fact that this case survived the initial motion to dismiss stage is significant in itself. It signals that courts are open to applying standard tort principles to new technologies, rather than giving AI companies a free pass.
If the case gains traction through discovery and possibly trial, it will likely encourage more lawsuits by others harmed by AI outputs. Tech companies may also need to invest more in "safe design" features, thorough testing, user warnings, and perhaps insurance to cover AI-related liabilities.
Insurers and investors in the AI sector will also be paying attention. A legal precedent imposing liability could lead to higher insurance premiums and risk mitigation requirements for AI products. On the flip side, a ruling about whether AI speech is protected or not could have free speech implications, affecting how companies moderate their AI systems.
This case reflects a growing movement toward stronger oversight of AI, both through individual lawsuits and class actions. As attorneys, lawmakers, and regulators consider new AI safety rules, they may look to it as a case study, particularly as tort litigation begins to play a larger role in shaping how AI is governed. It has the potential to clarify that AI is not an unregulated wild west, but is subject to the same duty not to cause harm that we expect of any consumer-facing product.