Tech companies developing chatbots and voice agents may be exposing themselves to liability under the California Invasion of Privacy Act (CIPA) and the Wiretap Act.

Data privacy litigation increased significantly from 2023 to 2024, fueled by rising concerns over pixel tracking and session replay tools. Wiretap claims under CIPA have become one of the fastest-growing areas of privacy law, with 83% of those claims brought under California Penal Code § 631(a).

Plaintiffs are also bringing claims over another class of technology: chatbots and voice assistants. These suits allege that the tools intercept and record user communications without consent. For plaintiffs’ attorneys focused on privacy enforcement, recent cases offer a window into how courts are treating claims tied to undisclosed data collection, passive monitoring, and third-party involvement in automated communication tools.

This article reviews three class actions that demonstrate the role of conversational technology in privacy litigation: one involving a chatbot designed to mimic human conversation, another addressing the privacy implications of passive audio capture by consumer-facing voice assistants, and a third exploring how enterprise-grade AI systems used in customer service and business communications may expose companies to liability under privacy laws.  

Each one raises legal questions about consent, disclosure, and surveillance, and offers key takeaways for plaintiff-side litigators evaluating similar claims.

Deceptive Design and Hidden Surveillance: Valenzuela v. Nationwide Mutual Insurance Co.

In Valenzuela v. Nationwide Mutual Insurance Co., the US District Court for the Central District of California allowed claims under CIPA Section 631, the statute’s wiretapping provision, to proceed in a case involving third-party chatbot code embedded on Nationwide’s website.

The plaintiff, Sonya Valenzuela, alleged that her real-time chat communications were intercepted while in transit by a third-party service, Akamai, which Nationwide employed to provide customer engagement tools. The court held that Valenzuela had plausibly alleged that Akamai willfully intercepted her communications without consent, and that Nationwide could be held liable for aiding and abetting the Section 631 violation by embedding Akamai’s code and benefiting from its data practices.

The court dismissed the plaintiff’s separate claim under CIPA Section 632.7 on other grounds, with leave to amend. In August 2024, the parties stipulated to a dismissal with prejudice of the plaintiff’s individual claims.

Key Takeaway

The court’s ruling allowing the Section 631 claim to proceed indicates that message interception by third-party vendors, even those acting on behalf of a company, can constitute a CIPA violation if done without user consent. It also suggests that companies may be held liable for aiding and abetting such conduct, broadening potential avenues for CIPA litigation.

Attorneys evaluating chatbot and web surveillance cases should closely examine the relationships between site operators and third-party technology providers, especially when those tools are capable of capturing and analyzing communications in transit.

Passive Listening and Large-Scale Exposure: Apple’s $95 Million Settlement

The privacy risks associated with voice interfaces gained national attention in late 2024, when Apple agreed to a $95 million settlement resolving claims that its Siri voice assistant recorded users without consent, ending the litigation before trial.

Plaintiffs alleged that Siri was inadvertently activated by background noise and captured private conversations, including discussions with doctors, financial information, and other personal matters, and that those recordings were reviewed by human contractors for product improvement and targeted advertising. Plaintiffs claimed these practices violated the Electronic Communications Privacy Act, CIPA, and the federal Wiretap Act.

The case highlights two recurring risk factors associated with this technology: (1) always-on or passively triggered devices, and (2) repurposing of communications for AI training or analysis without explicit permission.

The scale of the settlement, paired with Apple’s decision to resolve the matter despite denying liability, reinforces the magnitude of exposure tech companies face when user consent is absent or unclear.

Key Takeaway

When AI systems capture audio passively or trigger recordings without clear user awareness, plaintiffs may have a strong basis for a privacy claim. Attorneys should examine whether any such system includes real-time disclosures, whether users have any opportunity to consent, and whether human review or AI training relies on the collected data.

Shifting Standards: How Ambriz v. Google Redefines Voice AI Privacy Risk

On February 10, 2025, the Northern District of California adopted a broader interpretation of CIPA in Ambriz v. Google LLC, denying Google’s motion to dismiss. The case focuses on Google’s AI-powered voice tools, including Contact Center AI and Google Assistant, which were allegedly used by businesses to record customer calls without proper consent, raising questions about Google's liability as the technology provider.

A key issue was whether Google could be held liable under CIPA even without any actual allegations of data misuse. The court held that misuse wasn’t necessary; it was enough that Google had the technical capacity to access and exploit the communications. This marks a shift from earlier chatbot-related rulings, which focused on how data was used rather than what systems were capable of doing.

While the complaint does not directly address CIPA’s party exception, it frames Google as a third-party service provider that unlawfully intercepted communications. The plaintiffs argue that Google's role in recording and analyzing conversations via Contact Center AI violated California law, even though it did not directly participate in the conversations. This raises broader legal questions about how privacy laws apply to AI systems that facilitate or monitor human communications behind the scenes.

The decision reflects a growing judicial recognition of AI’s role in privacy law and indicates increased legal risk for companies deploying voice-based AI tools, particularly when consent practices are unclear or insufficiently disclosed.

Key Takeaway

Voice AI systems that retain or process communications, even without explicit data misuse, may still trigger CIPA liability. Attorneys should focus on the system’s technical capacity to intercept and repurpose data, regardless of whether the company ultimately uses it. The scope of what the tool can do matters as much as what it actually does.

What This Means for Privacy Enforcement

The evolving body of case law around chatbots and voice assistants signals a critical shift in how courts interpret privacy statutes like CIPA and the Wiretap Act. As the line between human and machine communication continues to blur, companies deploying conversational AI face increasing exposure, not just for how data is ultimately used, but for how and when it is collected and whether users are meaningfully informed.

These developments offer expanding opportunities to challenge the design and deployment of automated communication tools, especially when consent is ambiguous or absent. Even passive listening or third-party facilitation can serve as the basis for liability, making it more important than ever to examine the inner workings of these technologies and hold the companies that create them accountable when necessary.

Katrina Carroll is a founding partner of the Chicago-based firm Carroll Shamberg and a nationally recognized plaintiff’s attorney known for her work in complex class action litigation. She specializes in consumer fraud, data privacy, and product liability.
