Schools across the globe are grappling with a new form of abuse: students use AI to create fake nude images of classmates, and those images spread fast, cause lasting harm, and are often treated as child sexual abuse material.
AI “nudify” tools are getting easier to use, and students are turning ordinary social media photos into sexualized fakes. The trend is spreading in schools worldwide, with perpetrators, usually male students, sharing the images via social apps and messaging chains.
Data collected from these incidents shows that almost 90 schools across 28 countries have reported cases, affecting at least 600 students.
The findings show that since 2023, schoolchildren, most often boys in high schools, have been accused of using generative AI to target their classmates with sexualized deepfakes. Because the explicit imagery depicts minors, it is considered child sexual abuse material (CSAM). This analysis is believed to be the first to review real-world cases of AI deepfake abuse taking place at schools globally.
Taken as a whole, the analysis shows the worldwide reach of harmful AI nudification technology, which can earn its creators millions of dollars per year, and reveals that schools and law enforcement officials are often unprepared to respond to these serious sexual abuse incidents.
Across North America, there have been nearly 30 reported deepfake sexual abuse cases since 2023—including one with more than 60 alleged victims, one where the victim was temporarily expelled from school, and others where pupils at multiple schools have allegedly been targeted simultaneously. More than 10 cases have been publicly reported in South America, more than 20 across Europe, and another dozen in Australia and East Asia combined.
The reach and speed of sharing make the damage immediate and hard to contain, and victims often feel humiliated and fear that the images will follow them for years. Schools and police departments frequently lack the training and protocols to respond to AI-driven sexual abuse, leaving survivors without clear support or recourse.
Experts warn that when minors are depicted, the images can be classified as CSAM, which raises the legal stakes and complicates how schools handle investigations. The emotional toll is severe: images can be saved, reshared, and resurface later, even after adults believe the matter is closed.
Reports to the CyberTipline show an explosive rise in cases tied to AI-generated sexual images, jumping from 4,700 reports in 2023 to 67,000 in 2024, and then to 440,000 in the first half of 2025 alone. Many of those reports involve students whose ordinary social media photos were turned into fake nudes without their consent.
Some districts are responding by tightening rules on sharing student photos and revising acceptable-use policies for devices and apps. Those policy moves are often reactive, though, as lawmakers and school leaders scramble to catch up with technology that evolves faster than statutes and district guidelines.
By 2025, more than half of U.S. states had enacted laws targeting the creation and distribution of realistic AI-generated images and audio, but enforcement and awareness vary widely. State-level changes help, but they do not replace immediate school-based responses that prioritize victim safety and evidence preservation.
Cases show the range of consequences: in one state, four boys were charged in juvenile court with creating fake nude images of 44 girls from social media photos, prompting the district to draft new rules. In other incidents, victims have been temporarily removed from school or have seen harassment spread across multiple campuses.
Teachers and administrators say prevention must include digital literacy and a culture that discourages sharing explicit content, but many educators feel unequipped to lead that work. Parents, too, are learning that even casual photo-sharing can expose kids to risks when AI tools can turn images into something abusive in minutes.
Technology firms and policymakers are debating technical fixes, takedown processes, and criminal penalties, but survivors still need immediate support: counseling, clear reporting channels, and protocols that stop further sharing. The core problem remains: cheap, accessible AI tools plus adolescent cruelty equals a threat that schools were not built to handle.