Ceccanti had been communicating with OpenAI's chatbot for a few years. He used it initially as a tool to brainstorm ways to build a path to low-cost housing for his community in Clatskanie, Oregon, but eventually turned to it as a confidante. He would spend 12 hours a day typing to the bot, according to his wife. He had cut himself off from it after she, along with his friends, realized he was spiraling into beliefs that were detached from reality.
"He was not a depressed person," Fox said, as she sat on the couch in their living room with tears trickling down her face. Ceccanti never discussed suicide with the bot, according to his chat logs, viewed by the Guardian. Fox believes her husband suffered a crisis after quitting ChatGPT following prolonged use. "Which tells me that this thing is not just dangerous to people with depression, it's dangerous to anybody," she said. He returned to the bot in the months leading up to his death and quit again just days prior.
Ceccanti's case is extreme, but as hundreds of millions of people turn to AI chatbots, more and more edge cases of AI-induced delusions are emerging. There are nearly 50 cases of people in the US who have had mental health crises after or during their conversations with ChatGPT, of whom nine were hospitalized and three died, according to a New York Times report. It's difficult to understand the scale of the problem, but OpenAI itself estimates that more than a million people every week show suicidal intent when chatting with ChatGPT.