Frontierbeat

Anthropic Now Wants Your Passport to Use Claude—and Its Verification Partner Has a Leaky Past

[Image: Digital identity verification screen showing government ID scan and selfie capture prompt for AI chatbot access]

Anthropic now wants to see your passport before it lets you use some of Claude’s features. The company quietly rolled out identity verification this week, requiring users to submit a government-issued photo ID and possibly a real-time selfie—measures it says are meant to “prevent abuse, enforce usage policies, and comply with legal obligations,” according to its support page.

The timing is awkward. Just days earlier, dozens of adult users reported on Reddit that Anthropic had incorrectly flagged them as under 18 and suspended their accounts. “Our team found signals that your account was used by a child,” the company’s email read. Users on the Pro Plan found their projects broken, their conversations reviewed, and their subscriptions refunded without consent.

That age verification system used Yoti, a separate provider. The new identity verification system uses Persona Identities—and that choice is raising eyebrows of its own. It’s also the latest in a string of aggressive moves from Anthropic, which has been turning down massive VC offers while simultaneously building what may be the most restricted consumer AI platform on the market.

Why Persona’s Track Record Is Raising Red Flags

Persona, Anthropic’s chosen verification partner, has a complicated recent history. According to reporting from KuCoin, Persona previously exposed a platform interface linked to government surveillance, leaking the verification data of Discord users. Discord publicly distanced itself from the company afterward.

A Mashable investigation found that personal data from LinkedIn users verified through Persona could be shared with up to 17 third-party companies. Despite this, Anthropic’s support page states it selected Persona “based on the strength of their technology, privacy controls, and security safeguards.”

The disconnect between that endorsement and Persona’s track record is hard to ignore—especially when the data being collected includes government IDs and biometric selfies. The move also follows Anthropic’s broader pattern of tightening access controls as its models grow more capable.

The Bigger Picture: AI Companies Tightening the Screws

Anthropic isn’t alone in tightening access controls. OpenAI has its own “Trusted Access for Cyber” framework. But requiring government ID for a consumer chatbot is a step most competitors haven’t taken. The move signals that Anthropic is treating Claude less like a tool and more like a regulated platform—one where knowing your user matters as much as knowing your customer.

For users, the tradeoff is stark: more security against abuse, in exchange for handing biometric data to a company with a documented history of mishandling it. Whether that tradeoff is worth it depends on how much you trust Anthropic’s judgment—and Persona’s infrastructure.

The company says verification data is only used to confirm identity and is not used for model training or advertising. It also says the process takes under five minutes. But for users already burned by the false-minor flagging incident, the message is clear: Anthropic is watching, and it may not always get it right.
