A proposed class action lawsuit has been filed against Perplexity AI, accusing the artificial intelligence company of violating California privacy laws by sharing users' personal data with Meta and Google, even when users were browsing in the platform's Incognito mode.
The lawsuit, filed in California federal court, alleges that Perplexity AI engaged in deceptive practices by claiming to protect user privacy while simultaneously transmitting user data to third-party companies. The complaint specifically targets the company's handling of data collected during Incognito browsing sessions, which users typically expect to provide enhanced privacy protections.
According to the filing, Perplexity AI's privacy practices allegedly contradict the assurances provided to users about data protection and confidentiality. The lawsuit claims that despite marketing its services as privacy-conscious, the company shared user information with major technology platforms including Meta Platforms Inc. (formerly Facebook) and Google.
The case raises significant questions about the effectiveness of Incognito mode protections and the transparency of AI companies regarding their data collection and sharing practices. California has some of the strongest privacy laws in the United States, including the California Consumer Privacy Act (CCPA), which gives residents specific rights regarding their personal information.
This legal action comes at a time when AI companies face increasing scrutiny over their data practices and privacy policies. The lawsuit could have broader implications for how AI platforms handle user data and what privacy guarantees they can legitimately offer to their customers.
The complaint seeks class action status, which would allow the named plaintiffs to represent all users who may have been affected by the alleged privacy violations. If successful, the lawsuit could result in significant financial penalties for Perplexity AI and potentially force changes to how the company handles user data.
Perplexity AI, which has gained popularity as an AI-powered search and research tool, has not yet publicly responded to the allegations. The company's privacy practices and data handling procedures will likely face intense examination as the case proceeds through the legal system.
The lawsuit highlights the ongoing tension between AI companies' need for data to train and improve their models and users' expectations of privacy. As AI technology becomes more integrated into daily life, questions about data ownership, consent, and privacy protections continue to grow in importance.
Legal experts suggest that this case could set important precedents for how privacy laws apply to AI companies and what responsibilities they have to protect user data, particularly when users take steps like using Incognito mode to enhance their privacy.
The outcome of this lawsuit could influence not only Perplexity AI's business practices but also those of other AI companies operating in California and potentially across the United States. It may also prompt lawmakers to consider whether existing privacy laws adequately address the unique challenges posed by AI technology.
As the case develops, it will likely draw attention from privacy advocates, technology companies, and regulators who are all grappling with how to balance innovation in AI with the protection of user privacy rights.
This legal challenge represents another chapter in the broader debate about privacy in the digital age, particularly as it relates to AI systems that rely heavily on user data to function effectively.