What used to be whispers in the corners of the internet is now crashing into school hallways, teens' bedrooms, and communities that look like ours. This isn't sci-fi horror anymore. It's real. The danger of what folks call "deepfakes," AI-generated images and videos that can put our daughters' (or sons') faces onto bodies they never consented to share, is no longer some far-away nightmare.
Look at what's happening to our children:
- A new report from Thorn (2025) found that nearly 1 in 3 teens surveyed had heard of "deepfake nudes," and about 1 in 8 said they personally know someone targeted by them. (thorn.org)
- Another survey of teens aged 13–17 found that around 1 in 10 know someone who has been a victim of deepfake imagery, and some reported being targets themselves. (Education Week)
- Researchers and educators increasingly describe deepfake-enabled harassment as a "new form of cyberbullying" in schools: not just rumors or text-based bullying, but synthetic videos and images that students, especially girls, are subjected to. (TCU College of Education; Center for Democracy & Technology)
- Media coverage of actual cases, such as minors having nude deepfakes made of them or classmates using "nudify" apps to create explicit content, has risen sharply, pushing the issue into public awareness. (Missing & Exploited Children; linewize.com)
All of this suggests the problem is no longer fringe: for many teens, peers, educators, and families, deepfakes are becoming a real concern — sometimes painfully real.
⚠️ But awareness is inconsistent; many people are still unaware or under-informed (going forward, this must be covered extensively in sex education curricula)
- Surveys of school leaders show that many schools lack specific policies around deepfakes and non-consensual AI-generated content, so educators may not fully recognize deepfakes or may treat them no differently than "regular" bullying. (TCU College of Education; Center for Democracy & Technology)
- Some parents, caregivers, and other adults in the community remain unaware of how easy and accessible these tools have become, so conversations about consent, digital boundaries, and AI-enabled harassment may lag behind the reality. (Missing & Exploited Children; Nationwide Children's Hospital)
- Many people, teens included, struggle to tell when media (videos, images) are fakes. Studies show that even trained observers can fail to reliably identify deepfakes under realistic viewing conditions. (arXiv)
- There are gaps in public education, regulation, and support systems: affected teens often lack the resources to get harmful content removed, to receive support, or to find a trusted place to report abuse. (Center for Democracy & Technology; RAND Corporation)
So, while some people see the connection, especially those directly impacted or working in schools and child safety, many others haven't caught up to how serious and widespread the risk has become.
🎯 What this uneven awareness means in real life
- Teens (and, frankly, younger children too) are increasingly vulnerable to image-based abuse and harassment that feels "real," so feelings of shame, violation, fear, isolation, and distrust are rising.
- Many places (homes, schools, communities) are ill-equipped to respond, which means victims may suffer in silence or go without protection.
- Digital literacy, media awareness, consent education, and protective policies are desperately needed for youth, their families, schools, and broader communities.
- Without broad public awareness and a structural response (legal, educational, social), deepfake-enabled abuse may keep escalating while staying invisible for many victims.
We must see this. We must name it. We must protect our young ones with every tool we have — accountability, community care, education, legislation, and love.
