• 6% of women in public life surveyed by UN Women have been deepfake victims, and 45% of women journalists now self-censor online—up 50% since 2020.
  • AI tools can now generate non-consensual explicit imagery cheaply and at scale, with attacks described as “coordinated and deliberate.”
  • An Ohio man became the first person convicted under a new federal deepfakes law in April 2026, as enforcement begins to catch up to the technology.

The UN just dropped a brutal report on what’s happening to women in public life online. The UN Women study, which surveyed more than 1,500 women in public life, found that 6% have been deepfake victims, nearly a third have received unsolicited sexual advances, and generative AI tools can now strip clothes from photos or simulate sexual assault without consent. The tech is cheap, accessible, and devastatingly effective.

“Artificial intelligence is making abuse easier and more damaging,” said Kalliopi Mingeirou, who leads UN Women’s efforts to end violence against women. “When women in general, or journalists and human rights defenders, are driven out from digital spaces, we all lose.”

The report, titled “Tipping Point: Online Violence — Impacts, Manifestations and Redress in the AI Age,” documents staggering levels of self-censorship. Among all women respondents, 41% avoid posting on social media to dodge abuse. Among women journalists specifically, 45% self-censored online in 2025, up from 30% in 2020. That’s a 50% jump in five years.

The mental health toll is severe. Nearly a quarter (24.7%) of women journalists have been diagnosed with anxiety or depression connected to online violence. Almost 13% have PTSD diagnoses.

The Sophistication Gap

This isn’t just trolls in basements anymore. The attacks are “coordinated and deliberate,” designed to silence women while undermining their professional credibility. It’s part of a broader pushback against gender equality, amplified by technologies that profit from misogynistic hate speech.

Intersectionality matters. LGBTQ+ women, racialized women, and those from religious backgrounds face higher risk. “Being a woman makes you a target, but being a woman with intersectional identities makes you a bigger target,” the report notes.

Policy Response Is Finally Arriving

The legal landscape is shifting. In April 2026, an Ohio man became the first person convicted under a new federal deepfakes law for using AI-generated sexually explicit images to harass at least six women. The FTC is preparing “robust enforcement” against sexual deepfakes.

YouTube expanded its automated likeness detection to 4 million creators in its Partner Program, letting them request removal of AI-generated content using their likeness. Eighteen states now have laws protecting candidates from election deepfakes.

But the UN report warns these measures aren’t enough. The speed at which AI-generated content circulates, combined with anonymity and weak accountability, makes the abuse more dangerous than ever.

Women are fighting back. In 2025, women journalists were twice as likely to report incidents to police as in 2020 (22% vs. 11%). Almost 14% are now pursuing legal action against perpetrators.

Meanwhile, AI-assisted harassment isn’t slowing. The Senate’s GUARD Act targeting AI companions for minors and Big Tech’s $600B AI buildout are both racing ahead, widening the gap between AI capability and legal protection by the quarter. The industry’s own inconsistency on safety restrictions hasn’t helped.

The enforcement infrastructure is being built. Whether it scales fast enough is the open question.
