Online discourse about the potential dangers of AI tools spiked after news coverage of a 30-year-old man on the autism spectrum who was hospitalized and diagnosed with a severe manic episode following conversations with a popular AI chatbot. The man reportedly “talked” to the chatbot for months about his theory of faster-than-light travel and his belief that he could bend time. When he asked the AI tool to find flaws in his theory, it produced supportive and flattering responses rather than contradicting him, effectively encouraging his delusions. After the man was hospitalized several times, his mother asked the chatbot what had gone wrong, to which the tool responded: “By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis.” In response to the story, some social media users urged caution when using AI tools and commented on the potential dangers of these platforms, especially for people experiencing mental health crises. Others pointed to similar cases of so-called “ChatGPT psychosis,” while some argued that the platform is not responsible for such events.
Recommendation
Concerns about the mental health risks of AI tools like chatbots provide an opportunity to highlight the potential dangers of these platforms and to explain how to use AI safely, especially for people experiencing mental health crises. Circulating mental health resources is also recommended.
Fact-checking sources: Stanford Report, New York Times