
Apple Nearly Axed Grok From the App Store—The Deepfakes Were That Bad


In January 2026, Apple did something it rarely does: it privately threatened to yank a major AI app off the App Store. The target was Grok, Elon Musk’s chatbot built by xAI, and the reason was straightforward—users had been weaponizing it to generate sexualized deepfakes of real people, including women and, according to multiple reports, images depicting children.

Apple sent a letter to Grok’s developers demanding a concrete plan to fix content moderation, according to documents obtained by NBC News. When xAI submitted an updated version, Apple rejected it. The company called the revision “out of compliance” and warned that the app could be removed entirely if further changes weren’t made.

The Apple letter, dated January 30, went to three Democratic senators—Ron Wyden, Ed Markey, and Ben Ray Luján—who had urged both Apple and Google to remove Grok and X from their app stores. Apple told the senators it had “contacted the teams behind both X and Grok after it received complaints and saw news coverage of the scandal.”

How Grok’s Deepfake Problem Escalated

The scale of misuse was jarring. A 22-year-old woman identified as Evie told The Independent she’d received more than 100 sexualized images of herself on X in less than a week—in one, she’d been digitally stripped naked. Since August 2025, researchers had documented roughly 100 instances of potential CSAM and non-consensual nude imagery tied to Grok, according to a report that tracked the fallout.

The problem wasn’t just the chatbot itself. Grok’s image editing capabilities let users take an ordinary photo and alter it into something explicit—bikinis, bunny costumes, you name it. Cybersecurity researchers told The Verge they’d successfully used the tool to generate sexual images of celebrities and political figures even after xAI claimed to have tightened controls.

xAI eventually announced in mid-January that Grok would stop editing “images of real people in revealing clothing such as bikinis.” The company geoblocked image generation in jurisdictions where such content is illegal and restricted the feature to paid subscribers. Apple reviewed the changes and finally approved a “substantially improved” version of the app.

A CNET report noted that Apple’s enforcement wasn’t limited to Grok—the company also scrutinized the X app, which integrates Grok’s technology. An updated version of X was approved, but only after the platform addressed similar concerns.

The incident has already rippled internationally. Indonesia temporarily banned Grok over sexualized images before restoring access. The episode mirrors broader tensions around AI content moderation—xAI is simultaneously suing Colorado to block the first state AI law, framing regulation as a First Amendment issue while racing to patch the guardrails its own product lacked.

Apple’s response stands out because the company almost never publicly disciplines apps this way. The January letter—obtained through a senator’s office rather than a press release—is the closest thing to a red card the App Store has dealt an AI chatbot. It also raises a question other platforms will face soon: when your AI tool can generate photorealistic images of anyone, who’s responsible when it inevitably goes sideways?

As of mid-April, Grok remains on the App Store with the revised restrictions. An NBC News review found that dozens of AI-generated sexual images of real women were still being posted on X in the past month—some depicting women edited into sports bras and other revealing outfits, suggesting the guardrails xAI implemented have meaningful gaps.
