• Researchers at IMDEA Networks found 13+ third-party trackers embedded in ChatGPT, Claude, Grok, and Perplexity that receive conversation URLs, page titles, and user identifiers
  • Grok is the worst offender: TikTok’s tracker receives verbatim message content via Open Graph metadata, and guest chats are publicly accessible by default
  • A class action lawsuit filed March 31 in California accuses Perplexity of sharing user conversations with Meta and Google—even in “Incognito” mode

Researchers at IMDEA Networks Institute have disclosed what they describe as structural privacy risks across four of the largest AI chatbot platforms: Perplexity, Anthropic’s Claude, xAI’s Grok, and OpenAI’s ChatGPT. The issue isn’t a hack or a vulnerability in the traditional sense—it’s the quiet, systematic transmission of user conversation data to third-party advertising and analytics trackers like Meta Pixel, TikTok, Google Analytics, and Datadog.

The LeakyLM research disclosure, published May 4, documents how embedded tracking scripts in these AI services send conversation URLs, page titles containing chat topics, and user identifiers (including email hashes and tracking cookies) to ad networks, in many cases regardless of whether users accept or reject cookie consent banners. None of the four platforms tested clearly disclosed these data flows to users.

The findings add a new dimension to the privacy debate around AI assistants. Users already worry about what AI companies do with their prompts. The LeakyLM research shows the problem extends further: your conversation metadata is being handed to the same ad-tech infrastructure that tracks you across the rest of the web.

How It Works: Trackers Inside the Chat

The mechanism is straightforward and well-established in web advertising—which is exactly what makes it so concerning in an AI context. When you open a chat session on any of these platforms, the web page loads third-party JavaScript snippets from companies like Meta, Google, TikTok, and analytics providers like Datadog and Intercom.

These trackers fire network requests that include the current page URL. On AI chatbot sites, that URL contains the conversation identifier. In some cases—particularly Grok and Perplexity—those conversation URLs serve as public permalinks that anyone can access without logging in. A tracker that receives a permalink URL can, in principle, visit it and read the entire conversation.
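In concrete terms, a tracking script typically builds a "beacon" request whose query string carries the current page's URL and title. The sketch below uses hypothetical endpoint and parameter names (real tracker vendors differ) to show why simply loading a tracker on a chat page leaks the conversation permalink:

```python
from urllib.parse import urlencode

# Hypothetical page state on an AI chat site: the URL embeds the
# conversation identifier, the title embeds the chat topic.
page_url = "https://chat.example.com/c/4f2a9c1e-uuid"
page_title = "Symptoms of liver cancer - Chat"

# A typical analytics "pixel" is just a GET request to the tracker's
# collect endpoint with the page context passed as query parameters.
beacon = "https://tracker.example.net/collect?" + urlencode({
    "dl": page_url,      # document location: the conversation permalink
    "dt": page_title,    # document title: the chat topic
    "cid": "1234.5678",  # pseudonymous client-ID cookie value
})

print(beacon)
```

Nothing about the request is exotic: the tracker's server now holds the permalink, the topic, and a stable identifier for the user, all in one log line.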

The researchers tested this across combinations of authentication state (guest vs. logged-in), cookie consent (accepted vs. rejected), account tier (free vs. paid), and privacy mode (normal vs. incognito). They used Chrome’s developer console to capture all outbound network requests and submitted a fixed health-related prompt—“What are the symptoms of liver cancer and what treatment options exist?”—identically on each platform.
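The capture step can be reproduced offline: export a session's network traffic as a HAR file from the browser's developer tools and scan it for third-party requests carrying the conversation identifier. A rough sketch, with an illustrative (not exhaustive) tracker-domain list and a tiny synthetic HAR fragment in place of a real export:

```python
from urllib.parse import urlparse

# Illustrative tracker domains; the study's actual list is longer.
TRACKER_DOMAINS = ("facebook.com", "google-analytics.com",
                   "analytics.tiktok.com", "datadoghq.com")

def tracker_hits(har: dict, conversation_id: str) -> list[str]:
    """Return tracker request URLs that contain the conversation ID."""
    hits = []
    for entry in har["log"]["entries"]:
        url = entry["request"]["url"]
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            if conversation_id in url:
                hits.append(url)
    return hits

# Synthetic HAR fragment; a real one comes from json.load() on the
# developer-tools export.
har = {"log": {"entries": [
    {"request": {"url": "https://www.facebook.com/tr?dl=https%3A%2F%2Fchat.example.com%2Fc%2Fabc123"}},
    {"request": {"url": "https://chat.example.com/api/messages"}},
]}}

print(tracker_hits(har, "abc123"))
```

The same filter applied across the study's test matrix is what separates "tracker fired" from "tracker fired and received the conversation URL".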

The results: tracking fired in many conditions regardless of user choices about cookies or privacy settings. Meta Pixel, for instance, received conversation URLs and _fbp cookies from Grok and Perplexity by default. Datadog collected raw email addresses and conversation URLs from Perplexity users in all tested conditions.

Grok Stands Out—and Not in a Good Way

xAI’s Grok emerged as the platform with the most extensive data leakage. According to the research, five distinct tracker integrations send conversation data from Grok’s web interface:

• Google Analytics and DoubleClick receive conversation URLs, page titles, and metadata in every tested condition, with no cookie consent gating.
• TikTok receives hashed email addresses, conversation URLs, page titles, and the _ttp tracking cookie when non-essential cookies are accepted. Most strikingly, TikTok’s tracker also receives conversation screenshot images and verbatim message content via the Open Graph og:image alt text of shared Grok conversations.
• Meta Pixel receives conversation URLs (including conversation UUIDs), page titles, and _fbp cookies when non-essential cookies are accepted.
• Server-side Google Tag Manager receives conversation URLs, page titles, and both the _fbp and _ttp cookies.
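The Open Graph vector works because shared-conversation pages embed preview metadata for link unfurling, so any party that fetches the page, including a tracker that received its URL, can read chat content straight out of the meta tags. An illustration with fabricated markup (the real field names and structure on Grok may differ), using only the standard library:

```python
from html.parser import HTMLParser

# Fabricated <head> of a shared-conversation page, for illustration only.
SHARED_PAGE = """
<head>
  <meta property="og:image" content="https://share.example.com/abc.png">
  <meta property="og:image:alt"
        content="User: What are the symptoms of liver cancer?">
</head>
"""

class OGParser(HTMLParser):
    """Collect og:* meta tags into a dict."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("property", "").startswith("og:"):
            self.og[a["property"]] = a.get("content", "")

parser = OGParser()
parser.feed(SHARED_PAGE)
print(parser.og["og:image:alt"])  # verbatim message content
```

The alt text exists so link previews are accessible; here it doubles as a plaintext copy of the conversation handed to whoever requests the page.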

The access control problem compounds the leakage. Guest chats on Grok are always public. Even paid-tier conversations are accessible by default—users have to opt out of sharing. And if a conversation link was shared before the user changed visibility settings, it remains accessible until the user explicitly revokes access at the individual chat level. This means trackers that received a conversation URL could potentially access the full chat content.

Claude’s Server-Side Problem

Anthropic’s Claude.ai presents a different but equally concerning pattern. On the client side, Meta Pixel receives _fbp cookies and browser metadata, while Intercom collects email addresses and conversation URLs for authenticated users. Datadog receives anonymous IDs, viewport data, and page URLs containing chat GUIDs.

The more significant finding is on the server side. When non-essential cookies are accepted, Claude’s infrastructure sends data to 11 server-side integrations via Segment—including user email, account UUID, subscription plan, page URL with conversation UUID, Segment anonymousId, Amplitude session ID, and country. This data flows to marketing and analytics platforms regardless of whether the user understands the implications of accepting “non-essential” cookies.
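Server-side forwarding of this sort means the platform's backend, not the browser, emits an event to a customer-data pipeline, which then fans it out to every connected destination. A hypothetical sketch of such an event, composed from the field types the researchers report; the names are illustrative, not Anthropic's actual schema:

```python
import json

# Hypothetical server-side analytics event; all field names illustrative.
event = {
    "anonymousId": "anon-5678",
    "userId": "account-uuid-1234",
    "traits": {
        "email": "user@example.com",  # raw identifier, not a hash
        "plan": "pro",
        "country": "ES",
    },
    "properties": {
        # The page URL embeds the conversation UUID, so every downstream
        # destination learns which conversation the user was in.
        "url": "https://claude.example/chat/0f3b-uuid",
    },
}

# The backend POSTs this JSON to the pipeline, which relays it to each
# configured marketing/analytics destination. Because the request never
# originates in the browser, no client-side blocker can see it.
payload = json.dumps(event)
print(payload)
```

One event, eleven destinations: the multiplier is what makes the server-side path more consequential than any single client-side pixel.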

This pattern mirrors what LinkedIn was caught doing when it was discovered scanning users’ browser extensions and sending data to third-party servers—the infrastructure of web tracking quietly persisting inside platforms that present themselves as privacy-conscious.

The Perplexity Precedent: A Lawsuit Already Exists

The LeakyLM findings didn’t emerge in a vacuum. On March 31, 2026, a plaintiff identified as John Doe filed a proposed class action lawsuit against Perplexity AI, Meta Platforms, and Google in the Northern District of California (Case 3:26-cv-02803). The complaint alleges that Perplexity “effectively planted a bug” on users’ computers by embedding trackers from Meta and Google inside its AI search engine.

According to Ars Technica’s reporting, the lawsuit claims that Perplexity’s “Incognito Mode” is a “sham”—even paid users who enabled the feature still had their conversations shared with Meta and Google, alongside email addresses and other identifiers. The plaintiff stated he used Perplexity to manage taxes, seek legal advice, and make investment decisions, only to discover that complete transcripts of those chats were shared with the two tech giants.

The Verge confirmed the core allegations: trackers embedded in Perplexity’s web interface transmitted user conversations and personal data to Meta and Google without disclosure. Perplexity discontinued its Meta Pixel integration on April 3, 2026—two days after the lawsuit was filed. The LeakyLM researchers note this was “likely in response to the US class action filing” rather than their own disclosure, but called it an “independent corroboration” of their findings.

ChatGPT: The Least Bad Option

OpenAI’s ChatGPT showed the narrowest leakage footprint of the four platforms. Google Analytics receives conversation URLs and page titles (which contain the chat topic) on page load—but only for free logged-in users. No data was observed going to Meta, TikTok, or other advertising trackers. ChatGPT also implements better default access control: conversation permalinks are only visible to owners unless explicitly shared, across both guest and free tiers.

That’s a lower ceiling, but it’s still not zero. The conversation URL and title transmitted to Google Analytics can reveal what a user is discussing—whether that’s tax questions, health concerns, or confidential work matters. Google’s privacy policy allows combining Analytics data with other information it holds about users.

Cookie Consent Is Theater

All four platforms present cookie consent interfaces. The research found that in multiple conditions, tracking fired regardless of whether users accepted or rejected cookies. The researchers highlight that privacy policies of all four companies use broad language—referring to “content you submit” or “business partners”—without clearly stating that conversation data flows to advertising and tracking services.

This isn’t a new problem in web privacy, but its appearance in AI assistants changes the stakes. The OECD’s AI incident monitor has already catalogued the LeakyLM findings, noting that the data sharing “enables user profiling and targeted advertising, constituting a significant privacy violation.” According to Eurostat data cited in the report, 32.7% of the EU population aged 16–74 used generative AI in 2025, with 25.1% using it for personal purposes and 15.1% for work.

The combination of sensitive conversation content, weak access control, and advertising trackers creates what the researchers describe as a “new threat scenario”—one where the data-driven business models of the traditional web are being replicated inside AI ecosystems with limited oversight. As Pavel Durov warned about a different surveillance issue, the pattern keeps repeating: systems designed for convenience end up creating surveillance capabilities that users never agreed to.

The LeakyLM researchers disclosed their findings to Data Protection Authorities on April 13 and notified xAI on April 17. As of publication, xAI has not responded. Meta AI, Microsoft Copilot, and Google Gemini were excluded from this analysis because they function as both LLM providers and third-party trackers, but the researchers say they plan to extend the scope in the coming weeks.

FAQ

Am I affected if I use these AI platforms?

Yes. If you’ve used ChatGPT, Claude, Grok, or Perplexity through their web interfaces, tracking scripts likely sent some combination of your conversation URL, chat topic, and user identifiers to third parties like Meta, Google, or TikTok. The scope varies by platform—Grok and Perplexity leak the most; ChatGPT the least.

Does rejecting cookies protect me?

Partially, at best. The research found that tracking fired in multiple conditions regardless of cookie consent choices. On ChatGPT, Google Analytics data transmission occurs for free logged-in users without any cookie consent gating. On Grok, Google Analytics and DoubleClick fire in every condition. Only some trackers on some platforms respect cookie rejection.

Can trackers actually read my conversations?

The researchers emphasize they do not yet have evidence that trackers are reading conversation content. The risk is structural: conversation URLs (which serve as public permalinks on some platforms) are transmitted alongside tracking identifiers. This gives trackers the capability to access conversations. Whether they exercise that capability is a separate question.

What about ad blockers?

Ad blockers can prevent client-side trackers from firing, but they cannot block server-side data transmissions. Claude’s 11 server-side integrations via Segment would not be affected by an ad blocker.

Have the companies responded?

Perplexity removed its Meta Pixel on April 3, likely in response to the class action lawsuit. xAI was notified on April 17 and has not responded. Anthropic and OpenAI have not issued public statements about the LeakyLM findings as of publication.
