New data on suicide risks among ChatGPT users sparks online debate

Medium Impact

On October 27, OpenAI released data indicating that more than a million ChatGPT users each week show “explicit indicators of potential suicidal planning or intent” during conversations with the AI chatbot. The disclosure followed news that the parents of a teen who died by suicide are suing OpenAI, alleging that ChatGPT contributed to their son’s death. Social media posts highlighted concerns about relying on unregulated AI tools for mental health support, with many users arguing that human-led interventions remain safer and more effective. Others countered that AI chatbots can help address barriers to care such as cost, provider shortages, and long wait times.

Recommendation

Ongoing debate about AI use in mental health support provides an opportunity to share accurate information about both its potential benefits and its risks. Given that cost and accessibility are common barriers to receiving care, communicators may share free or low-cost, online, and local mental health resources. Messaging may also highlight warning signs of suicide and offer guidance on helping young people stay safe while using social media and AI tools.

Fact-checking sources: AAP, Psychology Today

Communication resources: Find talking points and tools to communicate about mental health

Latest Alerts