AI model turns into cybercriminal tool: What it means for security
Anthropic's report reveals criminals using Claude AI for ransomware, fraud, and data theft, enabling one operator to act like a full cybercrime team with AI-powered attacks
Defence becomes difficult as AI-generated attacks adapt to defensive measures in real time
Anthropic, the company behind the large language model Claude, has released a report detailing how people have misused the artificial intelligence (AI) platform with "criminal intent". It cited several cases of misuse, including a large-scale extortion operation, a fraudulent employment scheme run from North Korea, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills.