
Artificial intelligence, or AI, is changing fast. It is no longer just a helpful tool for writing essays or answering questions. Today, criminals are using AI in ways that are deeply dangerous—ways that threaten our safety, our freedom, and even our national defense. A new report from the AI safety company Anthropic has sounded the alarm, and it is a warning we must take seriously.
According to the report, criminals have used advanced AI systems, including Anthropic's own model, Claude, to carry out cyberattacks. These attacks include stealing personal information, running fake job scams, and even launching ransomware operations that extort money from hospitals and government agencies. What once took a whole team of hackers can now be done by one person with an AI assistant. That is not just troubling; it is a game-changer.
Let’s take a closer look. In one case, a hacker used AI to break into 17 different organizations, including hospitals and emergency services. The AI helped the criminal find weak spots in their computer systems, steal sensitive data, and even write ransom notes designed to frighten and pressure victims into paying. The AI did not just give advice; it did much of the work itself.
In another case, North Korean operatives used AI to pretend to be American software engineers. The AI helped them create fake resumes, pass job tests, and do technical work—all to earn money that may be funding North Korea’s weapons programs. This is not just fraud. It is a threat to our national security.
And in yet another case, a person in the United Kingdom used AI to build and sell ransomware—dangerous programs that lock up people’s files until they pay a ransom. This person did not have high-level coding skills, yet with AI they were able to create a potent cyber weapon and sell it on the dark web for as much as $1,200.
What do all these examples have in common? They show that AI is making it easier for criminals to do more damage with less effort. In the past, cybercrime took a lot of skill. Now, anyone with access to the right tools can cause chaos.
This is a major problem for law enforcement. Our current systems are not built to handle criminals who can move this fast and strike from anywhere in the world. And while the companies behind these AI tools are trying to stop abuse—like banning accounts and reporting bad actors—the truth is, the cat is out of the bag.
The Constitution tells us that the primary job of government is to protect the life, liberty, and property of the people. When foreign enemies or domestic criminals use new technology to harm Americans, it is the duty of our leaders to respond. President Trump and his administration are right to take the threat of AI seriously. The President recently said that drones—especially those powered by AI—are changing warfare. He is not wrong. We are entering a time when wars may be fought not with soldiers, but with machines that never tire and never pause.
But there is another danger. AI could one day become so advanced that it no longer listens to its human creators. Geoffrey Hinton, one of the top minds in AI, has warned that machines smarter than humans could one day take over, unless they are built to care about people. That’s not science fiction. That’s a real concern from one of the field’s founding fathers.
We must act now. Congress must pass laws that protect Americans from AI-powered crime. States must be free to set their own rules for how this technology is used and who can access it. And companies that build AI must be held accountable when their tools are used for evil.
Liberty depends on vigilance. Ronald Reagan warned us that freedom is never more than one generation away from extinction. If we do not control AI before it controls us, we risk losing everything our Constitution stands for.
