Criminal Tapped AI Model To Steal Healthcare Details, Financial Info and Government Credentials, Threatened To Expose the Data Unless Paid $500,000+
Anthropic says a cybercriminal has used its artificial intelligence (AI) model to steal sensitive personal information and demand hefty ransoms to not expose it.
Alex Moix, Ken Lebedev and Jacob Klein, members of Anthropic’s threat intelligence team, say in a new report that a cybercriminal misused the AI firm’s Claude chatbot to assist in data theft and ransom demands.
“We recently disrupted a sophisticated cybercriminal that used Claude Code to commit large-scale theft and extortion of personal data. The actor targeted at least 17 distinct organizations…
The actor used AI to what we believe is an unprecedented degree. Claude Code was used to automate reconnaissance, harvesting victims’ credentials and penetrating networks. Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines.”
Anthropic says that the cybercriminal orchestrated “a systematic attack campaign that focused on comprehensive data theft and extortion” using the AI model.
“The actor provided Claude Code with their preferred operational TTPs (Tactics, Techniques, and Procedures) in their CLAUDE.md file that is used as a guide for Claude Code to respond to prompts in a manner preferred by the user….
The actor’s systematic approach resulted in the compromise of personal records, including healthcare data, financial information, government credentials and other sensitive information, with direct ransom demands occasionally exceeding $500,000.”
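For context, CLAUDE.md is an ordinary Markdown file that Claude Code reads from a project directory to pick up standing instructions. A minimal, benign sketch of the format (purely illustrative, not the attacker’s actual file) might look like this:

```markdown
# CLAUDE.md — standing instructions for Claude Code (illustrative example)

## Project context
- This repository is a small Python web service; tests live under tests/.

## Preferred workflow
- Run the test suite before proposing any changes.
- Keep changes small and explain each one in plain language.

## Style
- Follow PEP 8 and prefer explicit, well-commented code.
```

According to Anthropic, the actor filled this same mechanism with their operational playbook, steering the tool’s behavior across the campaign.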
The firm also says that its AI models are being used for illicit purposes despite its efforts to prevent such abuse.
“We have developed sophisticated safety and security measures to prevent the misuse of our AI models. While these measures are generally effective, cybercriminals and other malicious actors continually attempt to find ways around them.”