Artificial intelligence (AI) company Anthropic has warned that its chatbot Claude is being used by bad actors to help carry out online crimes, despite built-in protections designed to prevent abuse.
The company said criminals are using Claude not only for technical advice but also to emotionally pressure victims, a method it refers to as “vibe hacking”.
In an August 28 report titled Threat Intelligence, Anthropic’s security team, including researchers Ken Lebedev, Alex Moix, and Jacob Klein, explained that “vibe hacking” involves using AI tools to manipulate people’s emotions, gain their trust, and influence their decisions.

For example, one hacker reportedly used Claude to help steal private information from 17 different targets, including hospitals, public safety agencies, government offices, and religious groups. The hacker then demanded ransom payments from victims in Bitcoin.
Claude was used to review stolen financial documents, suggest ransom amounts for each victim, and write personalized messages designed to create stress or urgency.
Although Anthropic eventually revoked the attacker’s access to Claude, the company noted that the case showed how much easier it has become for people with limited technical skills to create effective malware and evade detection.
The report also mentioned a separate case involving North Korean IT workers. Anthropic stated that these individuals used Claude to create false identities and pass job interviews for roles at major US tech firms, including some on the Fortune 500 list.
On August 13, blockchain investigator ZachXBT revealed how a North Korean hacking group used fake identities and freelance job platforms to secure crypto-related roles.