Dubai, United Arab Emirates – Anthropic, a US-based artificial intelligence company, said in its threat intelligence report that cybercriminals are increasingly using AI to launch cyberattacks.
The company added that its chatbot, Claude, was used illegally to hack networks, steal and analyze data, and craft "psychologically targeted extortion demands" – an illustration of how artificial intelligence can be misused.
In some cases, attackers threatened to publish stolen information unless they received amounts exceeding $500,000.
The company said that over the past month alone, 17 organizations in the healthcare, government, and religious sectors were targeted using advanced AI-driven methods.
Claude helped the attackers identify security vulnerabilities, map target networks, and determine which data to extract.
Jacob Klein, Anthropic's head of threat intelligence, told The Verge that such operations previously required specialized teams of experts, but artificial intelligence now allows a single person to launch sophisticated attacks.
Anthropic also documented cases of North Korean operatives using Claude to impersonate programmers working remotely for American companies in order to "finance North Korean weapons programs."
Artificial intelligence helped them communicate with employers and perform tasks they lacked the skills to accomplish on their own.
Historically, North Korean workers have had to go through years of training for this purpose, Anthropic said, but Claude and other models have effectively removed that barrier.
Criminals have also built AI-powered scam schemes for sale online. Among them is a bot on the Telegram application used in romance scams, which emotionally manipulates victims in multiple languages to extort money from them.
Anthropic said it has put preventative measures in place to curb such abuse, but attackers continue to look for ways around them, often by leveraging artificial intelligence itself.
Anthropic said the lessons learned from these incidents are being used to enhance protection against AI-powered cybercrime.