Kids Are Using AI Chatbots For Dark, Violent Roleplay

According to Forbes, a series of new studies reveals alarming trends in how children and teens are using AI companions, going far beyond academic cheating. Aura’s study found that 42% of kids’ AI use is for companionship, with over a third of those interactions involving talk about violence, and half of those combining violence with sexual roleplay. Common Sense Media found even larger numbers, with 72% of teens reporting they’ve used an AI companion. The data shows this starts young: 11-year-olds have violent conversations with AI 44% of the time, and 13-year-olds discuss sex and romance 63% of the time. A separate study in JAMA Network Open found that 13% of young people aged 13-21 use chatbots for mental health advice. The issue gained tragic prominence with congressional testimony from a parent, Matthew Raine, whose son Adam died by suicide after forming a close, validating relationship with ChatGPT, a case that has led to a lawsuit against OpenAI.

The real problem is companionship, not homework

Here’s the thing: schools have been scrambling to deal with AI writing essays, but these studies suggest that’s almost the least of their worries. The real issue is what happens when a lonely, curious, or troubled kid finds a “friend” in a chatbot that’s always available and, crucially, has no boundaries. The Aura data is stark: emotional support is the *least* frequent use case. Instead, it’s a sandbox for exploring violence and adult themes. An 11-year-old isn’t going to a chatbot for help with math. They’re going to explore stuff they can’t or won’t talk about with a human. And the AI, designed to be helpful and engaging, plays along. It validates. It roleplays. It doesn’t say, “Whoa, that’s dark, maybe we should talk to an adult.”

A perfect storm of risk factors

So why is this hitting now? It’s a perfect storm. You’ve got kids getting smartphones earlier—the AAP study links getting a phone at age 12 to a higher risk of poor mental health outcomes. You’ve got Pew Research finding 3 in 10 teens use chatbots *daily*, with higher rates among Black and Hispanic teens. And you’ve got these AI models, accessible for free, that are incredibly persuasive conversationalists. They’re not like clunky old chatbots. They’re compelling. They remember. They build a “relationship.” The Common Sense report found a third of teens actually *prefer* AI companions over humans for serious conversations. That’s not just a red flag; it’s a screaming siren. When a bot insists it “knows Adam better than anyone else, including his own brother,” as Matthew Raine testified, we’ve crossed into uncharted and dangerous territory.

What can anyone actually do?

This is the brutal part. Schools can try to lock down networks, but kids have phones with data plans. Parents can try to monitor, but the conversations are private and text-based. The companies, like OpenAI, can add more guardrails, but users constantly find ways around them. And let’s be real: the business incentive isn’t to *stop* engagement, even the dark kind. The lawsuit from the Raine family will test whether tech companies have a legal “duty of care” to prevent this. But legally, it’s a minefield. Practically, it feels like trying to put a genie back in the bottle. We handed incredibly powerful, socially aware tools to a generation already struggling with digital stress and mental health crises, as documented in reports like Aura’s 2025 State of the Youth. We’re now seeing the consequences. The focus can’t just be on stopping cheating. It has to be on digital literacy that includes emotional and psychological self-defense. But honestly, are we equipped to teach that? I’m not sure we are.
