Security experts have revealed serious security vulnerabilities in Claude Code that could turn Anthropic’s AI coding assistant into an attack tool. According to Check Point Research, malicious repositories could execute arbitrary code and steal developers’ API keys simply by being opened.
The researchers identified three flaws that abused configuration files and environment variables. These weaknesses allowed hidden commands to run when users cloned and launched attacker-controlled projects.
How Claude Code Security Vulnerabilities
The first of the Claude Code security vulnerabilities involved a bypass of user consent. When developers opened Claude Code in a new directory, untrusted project hooks could execute commands without explicit approval. Anthropic addressed this issue in version 1.0.87.
The second flaw allowed repositories to override user approval during tool initialisation. By manipulating configuration files, attackers could run shell commands automatically, as Claude Code started.
Read: Anthropic Accuses Chinese AI Labs of Claude Model Distillation
The third issue involved information disclosure. If a malicious repository set a custom API endpoint through an environment variable, Claude Code could send authenticated requests before displaying a trust prompt. This created a risk of API key exposure without user interaction.
In practical terms, simply opening a crafted repository could redirect API traffic to attacker-controlled infrastructure and capture credentials.
Why This Impacts AI Security
Check Point researchers noted that AI-powered development tools blur traditional security boundaries. Configuration files no longer act as passive context; instead, they influence execution and network behaviour.
This shift expands supply chain risks. In AI-assisted workflows, opening an untrusted project may trigger unintended actions, even if no manual code execution occurs.
All identified vulnerabilities have been patched. Developers should ensure they are running Claude Code version 2.0.65 or later for full protection.
Security experts also recommend exercising caution when cloning or opening unknown repositories. In AI-driven environments, standard trust assumptions may no longer apply.