Perplexity, one of the fastest-growing AI search tools, is facing intense scrutiny over how it handles user data. The issue is not centered on accuracy or hallucinations. Instead, the focus has shifted to privacy and what may actually happen behind the scenes when users believe their chats are protected.

A newly filed class-action lawsuit has brought those concerns into sharp focus. The complaint argues that Perplexity’s incognito mode may not provide the kind of privacy users would reasonably expect. That claim alone is enough to make many people pause, especially because AI chats often involve personal, sensitive, or highly specific information.

What the lawsuit says about incognito mode

The complaint, filed by an anonymous user identified as John Doe, alleges that Perplexity’s incognito mode is a “sham.” In practical terms, the accusation is that the feature does not meaningfully shield user data, even though people may choose it specifically to reduce tracking and data collection.

That matters because incognito mode carries a clear expectation. Users generally turn it on because they want a more private experience. If that expectation does not match what actually happens, then the issue is not just technical. It becomes a trust problem.

Allegations of Data Sharing With Third Parties

The lawsuit claims that user conversations were shared with third parties such as Google and Meta. According to the allegations, this data sharing may have happened even when users had explicitly selected incognito mode.

That detail makes the case especially unsettling. A user who intentionally activates a privacy-focused setting is signaling a preference not to be tracked in the usual way. If data was still transmitted under those conditions, the feature may have created a false sense of security.

Types of conversations named in the complaint

The complaint points to conversations involving potentially sensitive subjects, including:

  • Financial advice
  • Health concerns
  • Legal queries

These are exactly the kinds of topics people may ask about when they want discretion. AI tools can feel direct, private, and conversational, which often lowers people’s guard. And that sense of intimacy can lead to oversharing without much hesitation.

What User Data May Have Been Exposed

The allegations are not limited to the fact that data may have been shared. They also describe the kinds of information that may have been involved.

Reports referenced in the dispute suggest that the following data may have been passed along for ad targeting purposes:

  • IP addresses
  • Email addresses
  • Geolocation data
  • Full chat transcripts

That combination is what makes the situation feel especially serious. It is one thing for a platform to collect limited technical data. It is another if full conversations, alongside identifying or location-related information, are part of the flow.

Claims about tracking tools and ad targeting

The lawsuit also accuses Perplexity of embedding tracking tools similar to those used in online advertising. The complaint says users were not clearly informed about this behavior.

That lack of clarity sits at the center of the controversy. Privacy issues often come down to whether users understood what they were agreeing to. If tracking mechanisms were active in ways people did not expect, the gap between perception and reality becomes hard to ignore.

One of the more troubling claims is that, in some instances, entire conversations could be accessed through publicly reachable links.

Even without adding anything beyond what has been alleged, that claim stands out. A private AI exchange is usually treated like a contained interaction between user and platform. The suggestion that whole chats may have been reachable through public links raises obvious concerns about exposure and control.

Why exposure of full chats matters

When a conversation includes questions about money, health, or legal matters, exposure is not a minor inconvenience. The chat itself can reveal intent, circumstances, concerns, and private lines of thinking. If access to that material was broader than users understood, the privacy risk becomes much more personal.

Why the Perplexity Privacy Lawsuit Matters to Everyone, Not Just One App

This lawsuit is not just about one feature on one platform. It speaks to a broader issue with AI tools: they feel personal, and that feeling changes user behavior.

People tend to type things into AI systems that they might never post publicly. The interaction feels private, immediate, and low-pressure. That can make it easy to share more than intended. When a platform appears conversational, users often treat it with a level of trust that may not be fully earned.

AI trust and transparency are now part of the story

The complaint also claims that years of chats were shared with ad giants and that Perplexity does not clearly surface its privacy policy in the way rivals do.

If those allegations hold up, the consequences could stretch well beyond Perplexity. The case could push AI companies toward stricter transparency around data handling, tracking practices, and privacy controls. At the very least, it puts pressure on AI platforms to make their privacy promises easier to understand and harder to misread.

The Core Privacy Question Users Are Left With

At the heart of the dispute is a simple question: when users choose a setting labeled as private or incognito, what should they reasonably expect?

That question matters because privacy features are not just product options. They shape behavior. Users rely on them when deciding how much to reveal, what topics to explore, and whether a conversation is safe enough to continue. If the label suggests one thing while the underlying behavior suggests another, trust erodes fast.