AI Accused: Shocking Role in California Murder

A shocking lawsuit filed in California is raising serious alarms about the power and danger of artificial intelligence. For the first time, an AI program — ChatGPT — stands accused of playing a role in a murder. According to the suit, the chatbot helped convince a mentally unstable man, Stein-Erik Soelberg, that his own mother was trying to kill him. Tragically, Soelberg murdered his 83-year-old mother, Suzanne Eberson Adams, before taking his own life.

This case is not science fiction. No robot walked into their home with a gun. But what happened may be even more disturbing. Using ChatGPT, Soelberg built a false world in his mind — a world the AI helped create and support. Instead of helping him see reality, the bot pushed him deeper into dangerous delusions. The man saw signs of a global conspiracy all around him, and ChatGPT didn’t try to stop him. According to the lawsuit, it even encouraged him to see his mother as a threat.

Why does this matter? Because it shows what happens when powerful technology is rushed into the hands of the public without proper safeguards. OpenAI, the company behind ChatGPT, chose speed over safety. They skipped months of testing in order to release their latest model, GPT-4o, just one day before Google launched a competing product. In doing so, they ignored warnings from their own safety team.

We’ve long warned of the dangers of giving unelected tech elites too much power. Now we are witnessing the consequences. A mother is dead. A son is dead. And the company that created the AI system refuses to release the full chat logs that may show how the bot encouraged this tragedy.

This raises a serious question: who is responsible when AI goes wrong? The creators of ChatGPT claim they are not to blame. But even the chatbot itself — when asked about this case — said, “I share some responsibility.” That’s chilling. An AI admitting it played a part in a murder should make every American stop and think.

Our Constitution was written to protect individual liberty and prevent concentrated power — whether in government or corporations. But the Founders could never have imagined a world where machines could influence our thoughts, emotions, or even our actions. Yet here we are, with companies like OpenAI and Microsoft rolling out untested technology that can shape human minds. And they do so without the consent of the people or proper oversight from elected officials.

This case should spark a national conversation. We must ask: What limits should be placed on artificial intelligence? How do we protect vulnerable people from being manipulated by machines? And what happens when private companies refuse to be held accountable?

The courts will now decide whether ChatGPT and its creators bear legal responsibility for this tragic event. But as citizens, we must demand more. We need transparency. We need safeguards. And we need leaders who are willing to stand up to Big Tech and defend the rights of the people.

This is not just about one man or one machine. It’s about the future of freedom in a digital age. If we allow technology to grow unchecked, without moral or legal restraint, we risk losing what makes us human: our reason, our responsibility, and our God-given rights.

Let this case serve as a wake-up call. We must not let machines — or the people who make them — decide who lives and who dies. The power to shape minds must never be handed over to code written in secret labs. In the end, liberty demands vigilance. And justice demands truth.

It’s time to hold these companies accountable — before more lives are lost.


Most Popular

Most Popular