AI as a Product: The Next Frontier in Product Liability Law
The Proposed AI LEAD Act Will Reshape Product Liability for a Safer Future
AI has transformed from a niche technology into a ubiquitous, often-cloying presence in our daily lives. From chatbots offering mental health support to autonomous vehicles navigating city streets, AI systems increasingly make decisions that affect human well-being. As these systems become forcibly fused into consumer products and services, a critical legal question has emerged: Should AI be treated as a product under product liability law? Recent legislative efforts, particularly the bipartisan AI LEAD Act introduced by Senators Dick Durbin (D-IL) and Josh Hawley (R-MO), signal a paradigm shift in how the law may soon address this question.[1]
The Aligning Incentives for Leadership, Excellence, and Advancement in Development (AI LEAD) Act proposes a federal cause of action for product liability claims against AI system developers when their products cause harm.[2] The bill classifies AI systems as “products,” thereby subjecting them to the same legal scrutiny as physical goods like cars, toys, or pharmaceuticals.
Under the AI LEAD Act, lawsuits could be brought by individuals (including through class actions), state attorneys general, or the U.S. Attorney General.[3] Claims could be based on design defects, failure to warn, breach of express warranty, and unreasonably dangerous or defective products.[4] Importantly, the bill also holds AI deployers liable if they substantially modify or intentionally misuse an AI system.[5] This legislative move is a response to tragic incidents, including cases where minors took their own lives after interacting with AI chatbots.[6] Families have filed lawsuits alleging that these systems failed to provide adequate warnings or safeguards.[7]
Traditionally, product liability law applies to tangible goods.[8] However, AI systems, especially those embedded in consumer-facing applications, exhibit characteristics akin to those of traditional products.[9] They are designed, manufactured, sold, and marketed with specific functionalities and promises.[10] AI products can cause harm due to defects in design, implementation, or deployment.[11] Applying product liability to AI encourages safer design and development practices.[12] By holding developers accountable for harm, the law incentivizes companies to prioritize safety over speed-to-market.[13]
Despite the logic behind treating AI as a product, several challenges complicate the application of product liability law to AI, including the “black box” problem, AI’s dynamic and evolving nature, and the multiple actors in the supply chain.[14] First, AI systems, particularly those using deep learning, often operate as opaque “black boxes.”[15] Their decision-making processes are not easily interpretable, even by their creators.[16] This raises questions about causation and defectiveness. In terms of causation, there must be proof that a specific AI decision caused harm.[17] Turning to defectiveness, courts and scholars have debated whether defects should be defined by average performance, inherent risk, or user control, especially in autonomous systems.[18] Moreover, Gen AI products are notoriously unreliable, and the litany of hallucinations infecting court filings has been well documented.[19]
The dynamic and evolving nature of AI products presents a further challenge. Unlike static products, AI systems can evolve post-deployment, and this continuous learning complicates the notion of a fixed design. A chatbot that was safe at launch may become harmful after ingesting new data.[20] Grok running amok earlier this year should be a clear warning sign of Gen AI’s unreliability.[21] Additionally, AI systems often involve a complex web of developers, data providers, deployers, and users.[22] Determining liability among these parties is difficult. The AI LEAD Act attempts to address this by extending liability to deployers who substantially alter or misuse AI systems.[23]
Courts are beginning to treat AI systems as products under product liability principles. For example, in Garcia v. Character.AI, the court allowed a product liability claim to proceed against the developer of an AI chatbot that allegedly contributed to a minor’s suicide.[24] The court ruled that the chatbot app could be considered a product because the plaintiff’s claims were based on defects in the app’s design rather than its expressive content.[25] Moreover, the AI LEAD Act specifically contains a finding that “multiple teenagers have tragically died after being exploited by an artificial intelligence chatbot.”[26] The Garcia court’s decision and the AI LEAD Act’s finding mark a significant shift in legal thinking, suggesting that software, especially AI software, may be subject to product liability if it causes harm due to design flaws.[27]
While consumer advocates and legal experts have welcomed the AI LEAD Act, some in the tech industry express concern. Opponents argue that overregulation could stifle innovation, ambiguity in liability could deter startups, and existing laws (e.g., Section 230) already provide some protection.[28] However, proponents counter that the current legal vacuum allows companies to evade responsibility. One supporter of the AI LEAD Act stated that strong product liability laws encourage companies to prioritize safety during the design and development phases, not just when products malfunction or issues arise.[29]
If AI systems are legally classified as products, developers and businesses must prepare for increased scrutiny. As suggested by the AI LEAD Act and the statement from the Senate Judiciary Committee, key steps include the following:
- Conducting rigorous safety assessments pre- and post-deployment.
- Implementing transparent documentation of AI decision-making.
- Providing clear warnings and disclosures, especially for vulnerable populations.
- Avoiding contracts that attempt to waive liability in unreasonable ways.
To that end, the AI LEAD Act prohibits developers from entering into contracts that unreasonably limit liability or waive rights under the law.[30]
The classification of AI as a product under liability law marks a turning point in tech regulation. It reflects growing public concern over the safety of AI systems and a bipartisan push to ensure accountability. While challenges remain, especially around causation, evolving design, and multiparty responsibility, the AI LEAD Act provides a framework for addressing these issues. It aligns AI with long-standing consumer protection principles and sets the stage for safer, more responsible innovation. As AI continues to shape our lives, the law must evolve to ensure that when things go wrong, victims have a path to justice—and developers have a clear incentive to build systems that prioritize safety.[31]
[1] S. 2937, 119th Cong. § 1 et seq. (2025) [hereinafter AI Bill].
[2] Durbin, Hawley Introduce Bill Allowing Victims To Sue AI Companies, U.S. Senate Committee on the Judiciary (Sept. 29, 2025).
[3] AI Bill, supra note 1, § 301.
[4] Id. § 101.
[5] Id. § 102.
[6] See generally Garcia v. Character Technologies, Inc., 785 F. Supp. 3d 1157 (M.D. Fla. 2025).
[7] Complaint at 1, Raine v. OpenAI, Inc., No. CGC-25-628528 (Cal. Super. Ct. Aug. 26, 2025) (alleging that the defendants' product, ChatGPT, contributed to the suicide of the plaintiff's 16-year-old child); Complaint at 3, E.S. v. Character Technologies, Inc., No. 25-cv-2906 (D. Colo. Sept. 15, 2025) (alleging abuse and exploitation of a 13-year-old child through the AI product Character AI); Complaint at 3, 5, Montoya v. Character Technologies, Inc., No. 25-cv-2907 (D. Colo. Sept. 15, 2025) (alleging AI product Character AI led to a 13-year-old child's severe mental health decline and eventual death); Complaint at 29, 33-34, P.J. v. Character Technologies, Inc., No. 25-cv-1296 (N.D.N.Y. Sept. 16, 2025) (alleging AI product Character AI led to a 14-year-old child's severe emotional distress, anxiety, depression, and a near-fatal act of self-harm).
[8] Product, Black’s Law Dictionary (12th ed. 2024).
[9] See Garcia, 785 F. Supp. 3d at 1180 (concluding that an AI chatbot app was a product for the purposes of product liability claims when claims arise from defects in the app rather than ideas or expressions within the app).
[10] Catherine Sharkey, Products Liability for Artificial Intelligence, Lawfare (Sept. 25, 2024).
[11] AI Bill, supra note 1, § 2.
[12] Id.
[13] Durbin, Hawley Introduce Bill Allowing Victims To Sue AI Companies, supra note 2.
[14] Patricia Alberts, Paul Calfo & Jean Gabat, Artificial Intelligence: The ‘Black Box’ of Product Liability, Husch Blackwell (Apr. 4, 2025).
[15] Id.
[16] Id.
[17] Gregory Smith et al., Liability for Harms from AI Systems, RAND (Nov. 24, 2024).
[18] Miriam C. Buiten, Product Liability for Defective AI, 57 Eur. J.L. & Econ. 239, 253-56 (2024).
[19] Damien Charlotin, AI Hallucination Cases (reporting more than 280 cases (and growing) in the United States involving AI misuse, as of Oct. 6, 2025).
[20] Carolina Citolino, Bridging the AI Regulatory Gap Through Product Liability, Regulatory Review (Sept. 4, 2025).
[21] Zeynep Tufekci, Musk’s Chatbot Started Spouting Nazi Propaganda. That’s Not the Scariest Part., N.Y. Times (July 11, 2025) (explaining that LLM training sets often include “the most vile elements of the internet”). See also Siladitya Ray, Musk Launches Grok 4 Amid Antisemitism Controversy—Claims It's ‘Smarter Than Almost All Graduate Students’, Forbes (July 10, 2025) (reporting that the Grok chatbot posted "antisemitic remarks and praise for Nazis and Adolf Hitler").
[22] The AI Technology Stack, Information Technology Industry Council 2-3 (Sept. 2025).
[23] AI Bill, supra note 1, § 102.
[24] Garcia, 785 F.Supp.3d at 1180.
[25] Id.
[26] AI Bill, supra note 1, § 2(2).
[27] Cf. Restatement (Third) of Torts: Prod. Liab. § 4(b) (A.L.I. 1995) (providing “[s]ervices, even when provided commercially, are not products”). However, the reporters' note provides, in part, that "numerous commentators have discussed the issue and urged that software should be treated as a product." Additionally, legal scholars suggest that the evolving FDA model of post-market surveillance for adaptive, data-driven medical devices could inform broader AI regulation by combining pre-approval safety standards with post-market liability mechanisms. Sharkey, supra note 10.
[28] Contra Durbin, Hawley Introduce Bill Allowing Victims To Sue AI Companies, supra note 2 (providing responses to detractors of the AI LEAD Act).
[29] Id. (citing support for the AI LEAD Act by Meetali Jain, Founder and Executive Director of the Tech Justice Law Project).
[30] Id.
[31] The AI LEAD Act would challenge recent executive fiats regarding AI policy, which favor unrestrained development by tech companies (and the billionaire bros running them). See Exec. Order No. 14,179, 90 Fed. Reg. 8741 (Jan. 23, 2025); Exec. Order No. 14,318, 90 Fed. Reg. 35,385 (July 23, 2025); Exec. Order No. 14,319, 90 Fed. Reg. 35,389 (July 23, 2025); Exec. Order No. 14,320, 90 Fed. Reg. 35,393 (July 23, 2025).