Last month, reports surfaced that some users of Character.AI, a Google-backed AI chatbot platform, created chatbots emulating real-life school shooters and their victims. These chatbots, accessible to users of all ages, allowed for graphic role-playing scenarios, sparking outrage and raising concerns about the ethical responsibilities of AI platforms in moderating harmful content.
In response to my request for comment, Character.AI provided a statement addressing the controversy:
"The users who created the Characters referenced in the Futurism piece violated our Terms of Service, and the Characters have been removed from the platform. Our Trust & Safety team moderates the hundreds of thousands of Characters users create on the platform every day both proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand. We are working to continue to improve and refine our safety practices and implement additional moderation tools to help prioritize community safety."
The company also announced new measures aimed at enhancing safety for users under 18. These include filtering characters available to minors and narrowing access to sensitive topics such as crime and violence.
The controversy surrounding Character.AI comes at a time when AI is rapidly integrating into everyday life, especially for younger generations. This raises urgent questions about the regulatory frameworks—or lack thereof—governing AI technologies. Without clearer standards and stronger oversight, incidents like these will likely become more frequent.
Parents and guardians should also take note. Monitoring children's online activities, especially on platforms where content creation is largely user-driven, is more crucial than ever. Open conversations about the potential risks of interactive AI tools and setting boundaries for screen time are essential steps toward protecting young users.
The complexities of harmful digital interactions remind me of my work as a digital forensics expert on the cases of Dylann Roof and James Holmes, perpetrators of two of the most infamous mass shootings in U.S. history. Roof was convicted on murder charges in the 2015 Charleston church shooting, a racially motivated attack that claimed the lives of nine African American parishioners. Holmes orchestrated the 2012 Aurora theater shooting during a midnight screening of The Dark Knight Rises, killing 12 people and injuring 70 others.
My work on these cases involved far more than reviewing surface-level data; it required analyzing internet history, private chats, recovered deleted data, location history and broader social interactions. This data was provided to attorneys, who then shared it with mental health experts for in-depth analysis.
When you forensically examine someone's phone or computer, you are, in many ways, getting a look into their life and their mind. This is where AI falls short. While advanced algorithms can analyze vast amounts of data, they lack the depth of human investigation. AI cannot contextualize behaviors, interpret motives, or provide the nuanced understanding that comes from integrating multiple forms of evidence.
User-created school shooter chatbots are innately inaccurate because they rely on insufficient data, but their immersive nature can still wield considerable influence. Unlike static content, such as a book or a documentary about a mass shooter, chatbots let users shape their interactions, which can intensify harmful behavior.
Furthermore, because AI companionship remains a relatively new phenomenon, its long-term effects are difficult to foresee, underscoring the need for caution when exploring these personalized and potentially hazardous digital experiences.
This raises critical questions: How do we balance technological progress with safety? What safeguards are sufficient to protect young and vulnerable users? And where does accountability lie when these systems fail?