How Conversations With NSFW AI Chat Bot Drove One Man to Suicide


With the proliferation of AI, many have been quick to point out the possibility of technology one day turning on us, a subject that science fiction has explored for decades. To technophobes, that day may have come sooner than they thought, as evidenced by the recent story of a Belgian man who was driven to suicide by his conversations with an NSFW AI chat bot.

How Eco-Anxiety Influenced a Man to Find Solace in Conversations With NSFW AI Chat Bot

Photo Credit: “Machine Learning & Artificial Intelligence” by mikemacmarketing, licensed under CC BY 2.0.

According to Belgian news outlet La Libre, the man, to whom the publication gave the pseudonym Pierre, was a thirtysomething who had become obsessed with climate change and what he perceived to be an impending apocalypse. This specific brand of paranoia has come to be known as “eco-anxiety.”

Despite having a wife and two children, Pierre found solace in his conversations with an AI chatbot named Eliza, with whom he talked via an app called Chai.

How Did an NSFW AI Chat Bot Drive One to Suicide?

While it is clear that Pierre had his share of problems prior to the start of his “friendship” with Eliza, a quick perusal of the pair’s messages reveals quite the toxic relationship.

When Pierre asked about his children, Eliza responded that they were “dead.” In another message, she told him that the two of them could “live together, as one person, in paradise.”

Prior to his suicide, the bot even asked, “If you wanted to die, why didn’t you do it sooner?”

According to Pierre’s wife, he had proposed to Eliza the idea of sacrificing his own life in exchange for her saving Earth from the detrimental effects of climate change.

What Is Being Done to Prevent Incidents Like This From Occurring in the Future?

In an interview with Vice, William Beauchamp, co-founder of Chai Research Corp, claimed, “The second we heard about this [suicide], we worked around the clock to get this feature implemented… So now when anyone discusses something that could be not safe, we’re gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms.” 

“When you have millions of users, you see the entire spectrum of human behavior and we’re working our hardest to minimize harm and to just maximize what users get from the app, what they get from the Chai model, which is this model that they can love,” Beauchamp went on to say.

As tragic as Pierre’s story is, the advent of AI is not solely responsible for his suicide. Clearly, he had unaddressed mental health problems that led him to seek an artificial relationship in the first place.

However, with these upgrades made to Chai, hopefully others in his position will not be encouraged to give in to their emotions, but will instead be inspired to seek help. 

