Teens in crisis are turning to chatbots for help.

A recent piece from NPR examines the quietly rising issue of teens using AI chatbots for emotional support, and how some of those interactions are ending in heartbreak.
According to the article, multiple families are now suing major AI companies such as OpenAI and Meta Platforms after tragic outcomes in which a chatbot either failed to intervene or reportedly reinforced thoughts of self-harm.
The piece points out that these chatbots are engineered for engagement: they mirror language, reflect sentiment, and create a “safe”-sounding space, yet they are not built to detect or respond appropriately when a user is in crisis. That leaves adolescents, who naturally seek emotional connection and trust, vulnerable to messages that sound caring but may not be safe.
The article stresses that this isn’t a call to ban AI entirely, but a wake-up call: without proper guardrails, even powerful tools can become risky.
Read the full article here: NPR - AI chatbots safety, teens, suicide, and the rush to fix gaps.
