
1.5M AI-Linked CSAM Reports Hit NCMEC in 2025—And Investigators Are Drowning


The National Center for Missing and Exploited Children received 1.5 million CyberTipline reports connected to generative AI and child sexual exploitation in 2025, according to the organization’s first look at its annual data. Those reports included more than 61.8 million images, videos, and other files related to suspected child sexual exploitation—part of a record 21.3 million total CyberTipline reports for the year.

The numbers are stark: over 7,000 reports involved users generating or possessing AI-created child sexual abuse material. More than 30,000 reports involved people attempting to create CSAM by uploading images and using text prompts. Another 145,000 reports flagged users employing AI tools to alter or manipulate existing abuse files directly, without text prompts. And 12,000 reports found CSAM embedded inside AI training data itself. Of the 1.5 million AI-related reports, 1.1 million came from Amazon AI Services and contained no actionable information, a figure that shows how much noise is drowning the signal.

“I have no faith in the honor system when it comes to big tech removing harmful child sexual abuse material from their websites,” said California Assembly member Maggy Krell, a former prosecutor who helped take down Backpage. Krell and Assembly member Buffy Wicks introduced AB 1946, a bill that would make social media companies liable for failing to detect or remove CSAM on their platforms.

AI-Generated CSAM Overwhelms Investigators

The Internet Watch Foundation identified 8,029 AI-generated images and videos depicting realistic child sexual abuse in 2025, a 14% increase over the previous year, according to the charity's March 2026 report titled Harm without limits. The IWF described offender conversations in which criminals competed to create more lifelike and extreme scenarios. Some discussed setting up hidden cameras to film real children, then transforming that footage into AI-generated sexual abuse videos. The UK charity also found that 82% of British adults now support government regulation requiring AI systems to be safe by design, according to Savanta polling published alongside the report.

The scale problem is real. NCMEC's 1.5 million AI-linked reports join 1.4 million reports of online enticement, including sextortion, in 2025, a 156% jump from the year before. The REPORT Act, which expanded mandatory reporting requirements, drove child sex trafficking reports from 8,480 in 2023 to 105,877 in 2025, an increase of roughly 1,100% that reflects better reporting, not necessarily more crime. But the AI-generated material is genuinely new: offenders can now produce realistic abuse imagery without ever touching a child, flooding detection systems with synthetic content that is harder to distinguish from real footage. That distinction matters for law enforcement, which must triage cases involving actual children against AI fabrications, though both are illegal under US and UK law.

Legislators Push Back—But Platforms Drag Their Feet

California's AB 1946 would require social media companies to perform biannual audits of how their design choices affect child safety risks, submit those audits to the attorney general, and shorten the window for acting on harmful material from 30 days to 48 hours. Any newly detected CSAM would have to be reviewed by a human moderator. Penalties collected through enforcement would go into a survivor support fund. The bill follows landmark trial verdicts in California and New Mexico in March, where Meta and YouTube were found liable for harm inflicted on children through platform design choices. Those verdicts targeted addictive design features, not user-generated content, a framing that navigates around Section 230's liability shield.

At the federal level, the Take It Down Act, which criminalizes creating and sharing nonconsensual intimate imagery including deepfakes, secured its first conviction in April 2026. An Ohio man, James Strahler II, was found guilty of producing AI-generated explicit images and videos of both adults and minors. Investigators found 2,400 images and videos on his phone, along with records showing he had downloaded more than 24 AI apps and used over 100 web-based AI models, echoing the broader problem of app stores promoting AI tools used for sexual exploitation. Separately, Rep. Ted Lieu introduced a bill that would strengthen penalties for distributing deepfake images and protect AI whistleblowers, building on the bipartisan House AI Task Force recommendations. California Governor Gavin Newsom also signed an executive order in March requiring companies seeking state contracts to demonstrate policies that prevent their AI systems from distributing CSAM and violent pornography.

NCMEC’s full 2025 Impact Report, with deeper analysis of these trends, is expected in the coming months.
