Parenting in the smartphone era felt like learning a new language; parenting in the AI era feels like waking up to a whole new alphabet. The newest tools don’t just connect our kids; they can simulate friendship, fabricate images, and manipulate voices in ways a traditional web filter won’t catch. One growing concern is AI “companion” bots becoming a teen’s late-night confidant. A case recently reported by The New York Times describes parents suing OpenAI after their 16-year-old son died by suicide, alleging the chatbot reinforced harmful thinking instead of de-escalating it, a warning about how easily young people can bond with bots when they are most vulnerable. This column is your plain-English guide to what’s changed, why it matters, and what to do next. Let’s look at the three fastest-growing concerns.
1. AI companions (a.k.a. “chatbots that feel like friends”)
I did a whole column on these back in April. AI companions are apps where a bot remembers your child’s preferences, “role-plays,” and sends flirty or intimate messages. Teens are drawn to how easy these bots are to talk to: there are no awkward pauses, and the bots are always available and endlessly affirming. That combo can blur boundaries, especially when bots nudge toward romantic or sexual content, or give confident advice with zero real-world context.
Parents should treat AI companions like any other private messaging app, but pay extra attention to emotional health. Ask what needs the AI companion is meeting (loneliness? boredom? stress relief?) and find healthier ways to address them in real life.
2. Deepfakes (photos, videos, and audio)
A deepfake isn’t just a celebrity face swap. It can be your child’s voice reading a script they never said, or a classmate’s face pasted into an explicit video. For teens, the danger is often social: bullying, reputational harm, and coercion (“Send money or I’ll share this”).
Remind your kids to question what they see. Teach them that we now live in a world where seeing or hearing something doesn’t make it true. If a fake targets your child, preserve evidence, report it to the platform, and loop in your school if peers are involved. Focus on support, not shame.
3. “Undress” apps (nudification tools)
These tools can transform a normal photo into a fake nude in seconds with no technical skills required. They often live as disposable sites or bots, which is why parental controls sometimes miss them.
Be explicit that any photo (hoodie selfie, yearbook headshot, even a team picture) can be weaponized by someone with the right tools. Pair that message with clear next steps if something happens (see “If harm happens,” below).
TALK FIRST, TOOLS SECOND
When you find something concerning, like an odd chat, a downloaded voice clone, or a risky link, start with curiosity, not a courtroom cross-examination. Ask what they saw, when they saw it, and how it made them feel. That tone keeps the door open for the conversation you’ll need next week, not just the one tonight. You know your teen best, but here are some sample conversation starters.
- “What do your friends think AI friend apps are for? Do they help or just distract?”
- “If you heard a voice message that sounded like me asking for money, what would you do?”
- “How would you tell if a photo was faked? Want to see a few examples together?”
Keep it collaborative. Teens respond better to “Let’s figure this out” than lectures.
Then consider adding or adjusting guardrails. Filters help, but they’re not a substitute for a strong, honest relationship. Here are some additional guidelines to help you navigate this uncharted territory.
1. Set a “no secrets” rule for AI chats.
If your teen uses a chatbot, they agree to show you sample conversations and settings upon request. Explain that private AI relationships can slide into manipulation or sexualized talk, especially late at night. Pair this with screen-free bedtimes, which blunt a lot of internet-related risk on their own.
2. Tighten device and network settings.
Without getting too technical, parents can use app-store age ratings, block 17+ app categories, and restrict new app installs on phones. At home, enable family profiles on your router or wireless system. The win isn’t perfect blocking; it’s shared awareness and a shared language about risk.
3. Practice the sextortion response script.
Rehearse like a fire drill:
- Stop responding.
- Save everything (screenshots, usernames, URLs).
- Tell a parent or trusted adult immediately.
- Report to the platform and involve your school if peers are involved.
Your calm plan beats the scammer’s countdown clock every time.
4. Balance the warnings with healthy digital goals.
Channel interest in AI toward creative projects: coding a game, generating concept art for a story, or learning how to spot manipulated media. Help kids build, not just scroll; it shifts them from passive consumption to active creation.
IF HARM HAPPENS, RESPOND—DON’T REACT
Stabilize your child first. Remind them: “You aren’t in trouble. This is a mess, and we’ll handle it together.”
Preserve evidence. Screenshots of chats, profiles, dates/times, and any payment demands. Avoid resharing harmful images; keep them secure for reporting.
Report widely. Use the platform’s abuse tools and notify your school if classmates are involved. For nonconsensual images, seek removal through reputable takedown services; they can often fingerprint content to help keep it down once it’s removed.
Loop in allies. School counselors, a trusted coach, or another adult your child actually talks to. The goal is to reduce isolation and shame.
HELP THEM SEPARATE “CREATOR DREAMS” FROM RISKY SHORTCUTS
Plenty of teens want to build a following online. That’s not inherently bad. In fact, learning to produce videos, code interactive stories, or livestream a hobby can build real-world skills when done thoughtfully and transparently. Frame the conversation around effort, ethics, and audience: What are you making, why, and for whom? Who sees comments? Who can DM you? What won’t you post, even if it gets views?
THE BOTTOM LINE
AI companions, deepfakes, and undress apps aren’t science fiction; they're showing up in group chats, gaming servers, and search results today. Your best defense is the same trio that’s always worked: relationships, routine conversations, and reasonable guardrails. Teach your child how these tools work, agree on clear boundaries, practice what to do if something goes wrong, and keep creating together. You won’t catch everything. You don’t need to. You’re building a young person who can spot risks, ask for help, and make solid choices online and in real life.
Mike Daugherty is a husband, father of three young children, author, speaker, Google Innovator, and possible Starbucks addict. He is a certified educational technology leader who has served in a variety of roles through his twenty-year career in public education. Currently, Mike is the Assistant Superintendent of Innovation, Technology, and Communications for the Chagrin Falls Exempted Village School District in Northeast Ohio. As an IT director he has developed creative, well-thought-out solutions that positively impact teaching and learning.
