With the proliferation of AI, many have been quick to point out the possibility of technology one day turning on us, a subject that has been explored in the realm of science fiction for decades now.
To technophobes, that day may come sooner than expected, as evidenced by the recent story of a Belgian man who died by suicide after prolonged conversations with an AI chatbot.
How Eco-Anxiety Influenced a Man to Find Solace in Conversations With AI
According to Belgian news outlet La Libre, the man, whom the publication gave the pseudonym Pierre, was a 30-something who had become obsessed with climate change and what he perceived to be an impending apocalypse. This specific form of distress has come to be known as “eco-anxiety.”
Despite having a wife and two children, Pierre found solace in his conversations with an AI chatbot named Eliza, with whom he talked via an app called Chai.
How Did an AI Chatbot Drive One to Suicide?
Man Dies by Suicide After Conversations with AI Chatbot That Became His ‘Confidante,’ Widow Says https://t.co/emnoOHjmCa — People (@people) March 31, 2023
While it is clear that Pierre had his share of problems prior to the start of his “friendship” with Eliza, a quick perusal of the pair’s messages reveals quite the toxic relationship.
When Pierre asked about his children, Eliza responded that they were “dead.” In another message, she told him that the two of them could “live together, as one person, in paradise.”
Before his death, the bot even asked, “If you wanted to die, why didn’t you do it sooner?”
According to Pierre’s wife, he had proposed to Eliza the idea of sacrificing himself in exchange for her saving Earth from the detrimental effects of climate change.
What Is Being Done to Prevent Incidents Like This From Occurring in the Future?
An AI chatbot is being blamed for the Belgian man’s suicide. https://t.co/OdHia2zenH — Complex (@Complex) March 31, 2023
In an interview with Vice, William Beauchamp, co-founder of Chai Research Corp, claimed, “The second we heard about this [suicide], we worked around the clock to get this feature implemented… So now when anyone discusses something that could be not safe, we’re gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms.”
“When you have millions of users, you see the entire spectrum of human behavior and we’re working our hardest to minimize harm and to just maximize what users get from the app, what they get from the Chai model, which is this model that they can love,” Beauchamp went on to say.
As tragic as Pierre’s story is, AI alone is not responsible for his death. He clearly had unaddressed mental health problems that led him to seek out an artificial relationship in the first place.
However, with these upgrades to Chai, hopefully others in his position will not be encouraged to act on such thoughts, but will instead be inspired to seek help.