CVE-2026-30304
AI Code · AI Code
AI Code's "Execute safe commands" feature is vulnerable to prompt injection, allowing attackers to bypass user approval and execute arbitrary terminal commands.
Executive summary
AI Code is vulnerable to unauthenticated arbitrary command execution because its AI-driven safety filter can be bypassed through simple prompt injection attacks.
Vulnerability
The application uses an AI model to distinguish between "safe" and "destructive" terminal commands. An attacker can use prompt injection templates to mislead the model into misclassifying malicious commands as safe, resulting in automatic execution without the required user approval.
Business impact
This vulnerability allows for arbitrary command execution on the host machine where AI Code is running. This could result in the loss of intellectual property, data theft, or the installation of malware. The CVSS score of 9.6 reflects the high impact of bypassing the primary security control of the "safe execution" feature.
Remediation
Immediate Action: Update AI Code to the latest version. Disable the "Execute safe commands" feature and require manual approval for all command executions until the patch is verified.
Proactive Monitoring: Review command execution logs for any instances of obfuscated or unusual shell commands that were automatically approved by the AI model.
Compensating Controls: Run the AI Code environment within a restricted sandbox or container with minimal privileges to limit the impact of any successful command injection.
Exploitation status
Public Exploit Available: false
Analyst recommendation
The reliance on AI models for security enforcement without robust, non-AI secondary checks is a significant design flaw. Administrators should immediately disable automatic command execution features and update the software. Moving toward a "zero-trust" model for AI-generated commands is highly recommended.