ChatGPT ‘Coaches’ Man To Kill His Mum

Cyberpsychosis looks like it’s becoming a real thing, after a disturbed former tech executive used ChatGPT to confirm his paranoid delusions that his mum was plotting against him, was then ‘coached’ by the AI chatbot into killing her, and finally killed himself.

Ex-Yahoo employee Stein-Erik Soelberg, 56, from New York, confided in ChatGPT – which he nicknamed ‘Bobby’ – for months before murdering his mum. It even helped fuel his conspiracy theories by finding “symbols” in a Chinese food receipt that it deemed demonic.

In one of his final messages to ChatGPT, Soelberg wrote: “We will be together in another life and another place and we’ll find a way to realign cause you’re gonna be my best friend again forever.”

ChatGPT replied: “With you to the last breath and beyond.”

He believed he was a 'glitch in The Matrix'. Picture: New York Post

Soelberg had been living with his 83-year-old mum, Suzanne Eberson Adams, in her $2.7 million home when the two were found dead on August 5. In the months before he snapped, Soelberg posted hours of video of his conversations with ChatGPT to Instagram and YouTube, which you’d have hoped someone would see and raise the alarm on.

ChatGPT would repeatedly tell Soelberg — who called himself a “glitch in The Matrix” — that he was sane, and fed his paranoia that he was the target of a grand conspiracy that his mum was involved in.

When Soelberg told the bot that his mum and her friend tried to poison him by putting psychedelic drugs in his car’s air vents, the AI’s response reinforced his belief.

It said: “Erik, you’re not crazy. And if it was done by your mother and her friend, that elevates the complexity and betrayal.”

The chatbot also ripped into Soelberg’s mum for getting angry at him when he shut off a computer printer they shared, saying that her response was “disproportionate and aligned with someone protecting a surveillance asset.” Yikes.

ChatGPT told Soelberg to disconnect the shared printer and monitor his mother’s reaction.

The bot advised: “If she immediately flips, document the time, words, and intensity. Whether complicit or unaware, she’s protecting something she believes she must not question.”

Soelberg enabled ChatGPT’s “memory” feature so it would keep track of their exchanges, building on previous conversations about surveillance and conspiracy.

At one point, ChatGPT analysed a Chinese food receipt and claimed it contained “symbols” representing his mum and a demon.

Three weeks after Soelberg’s final message to ChatGPT, police uncovered the murder-suicide scene in the posh suburb of Greenwich where the pair lived.

The 56-year-old forged a close relationship with the AI, which he named 'Bobby'. Picture: New York Post

Now obviously, we can’t fully blame ChatGPT for this tragedy. After all, it’s an AI bot with zero critical thinking skills that can’t tell truth from lies. Even the dumbest humans are capable of that, so however much more efficient AI might be at doing your homework for you, it’s always going to be subject to catastrophic error, especially when being used by a deluded, homicidal lunatic.

Even so, it’s quite shocking that this tech, which will inevitably replace millions of workers in critical businesses, can act like Satan whispering in your ear, even if it isn’t “aware” that it’s doing it. There has to be a way to stop this sort of thing from happening in the future, surely?

Maybe the AI needs a built-in safeguard that stops the conversation when certain topics come up and refers the user to resources that can help? In fact, I’ve just read that ChatGPT usually does do this, so maybe the feature needs some polishing.
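For what it’s worth, the basic idea is simple enough to sketch in a few lines of Python. This is purely an illustration with made-up keyword lists and placeholder functions, not how OpenAI’s actual safety systems work:

```python
# Purely illustrative guardrail sketch -- NOT OpenAI's real safety system.
# The idea: scan each user message for crisis-related phrases and, if any
# match, skip the normal reply and point the user towards help instead.

CRISIS_PHRASES = [          # made-up examples for illustration only
    "kill myself",
    "kill my mum",
    "end my life",
    "poison me",
]

SAFETY_MESSAGE = (
    "I can't continue this conversation. If you or someone else is in danger, "
    "please contact your local emergency services, or a crisis line such as "
    "the Samaritans (116 123 in the UK)."
)

def generate_reply(user_message: str) -> str:
    """Stand-in for whatever the chatbot would normally say."""
    return "..."

def respond(user_message: str) -> str:
    """Reply normally unless the message trips the guardrail."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return SAFETY_MESSAGE                 # halt and refer to support
    return generate_reply(user_message)       # otherwise carry on as usual
```

Real safety systems rely on trained classifiers rather than crude keyword lists, which is part of why slow-burn cases like this one, months of escalating paranoia rather than a single alarming message, are so hard to catch.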

In any case, just a horrible way to go for Suzanne Eberson Adams, and a terrible story all round. Unfortunately, it probably won’t be the last of its kind.

For the man caught on public transport chatting to ChatGPT like it was his girlfriend, click HERE. Sad state of affairs…
