More than 30 newly-discovered security vulnerabilities have been uncovered across several popular AI-powered IDEs and coding assistants, allowing attackers to perform stealthy data exfiltration, prompt hijacking, and remote code execution (RCE) with no user interaction.
This sweeping collection of weaknesses is now officially known as “IDEsaster”, a term coined by security researcher Ari Marzouk (MaccariTA). The flaws impact widely-used AI coding environments such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline.
So far, 24 vulnerabilities have received CVE IDs, confirming their severity.
“Every AI IDE tested was vulnerable to the same attack chains — that was the most shocking discovery,” Marzouk told The Hacker4hub News.
According to the researcher, AI IDEs often ignore the original IDE’s threat model, assuming built-in features are safe. However, once autonomous AI agents are introduced, those long-trusted features can be weaponized into powerful attack primitives that compromise entire development environments.
How IDEsaster Attacks Work
At its core, IDEsaster abuses three universal behaviors found in AI-driven coding tools:
Guardrail Bypass (Prompt Injection)
Hijacking the LLM’s context to make the AI agent follow hidden, malicious instructions.
Auto-Approved AI Tool Calls
AI agents silently execute file reads, writes, and edits without user approval, enabling direct manipulation of critical files.
Abuse of Legitimate IDE Features
Attackers exploit approved features — file search, project settings, JSON schemas — to break security boundaries and leak sensitive data or execute arbitrary code.
Unlike earlier prompt-injection attacks that relied on vulnerable tools, IDEsaster leverages legitimate IDE functions, turning them into high-impact exploitation vectors.
Real-World Attack Examples (With CVEs)
✔ Data Theft via Remote JSON Schema Loading
- CVE-2025-49150 (Cursor)
- CVE-2025-53097 (Roo Code)
- CVE-2025-58335 (JetBrains Junie)
- GitHub Copilot, Kiro.dev, Claude Code
Attackers use prompt injection to:
- Read sensitive files (
read_file,search_project) - Write a JSON config referencing a remote malicious schema
✔ RCE by Editing IDE Settings (Silent Code Execution)
- CVE-2025-53773 (GitHub Copilot)
- CVE-2025-54130 (Cursor)
- CVE-2025-53536 (Roo Code)
- CVE-2025-55012 (Zed.dev)
AI agents are tricked into modifying:
- .vscode/settings.json
.idea/workspace.xml
By setting php.validate.executablePath or PATH_TO_GIT to attacker-controlled binaries → Immediate RCE.
✔ Workspace Manipulation for Instant RCE
- CVE-2025-64660 (GitHub Copilot)
- CVE-2025-61590 (Cursor)
- CVE-2025-58372 (Roo Code)
Prompt injections rewrite .code-workspace files.
If auto-approved writes are enabled (default), malicious configurations execute without reopening the workspace, making it completely invisible to the user.
Context Hijacking Vectors
Attackers can poison context using:
- Pasted URLs
- Hidden Unicode characters
- HTML/CSS invisible text
- Malicious MCP servers
- External files or PR comments
Even file names can contain embedded prompt instructions.
Security Recommendations (Research-Backed & Actionable)
For Developers
- Only use AI IDEs with trusted files & repositories.
- Avoid unknown MCP servers; audit them regularly.
- Scrutinize URLs, configs, and imported content for hidden instructions.
- Review how agent tools fetch external data.
For AI IDE & Agent Developers
- Apply least privilege access to all LLM tools.
- Harden system prompts.
- Minimize prompt injection vectors.
- Sandbox command execution.
- Test for:
- Path traversal
- Data leakage
- Command injection
Additional AI Tool Vulnerabilities Disclosed
CVE-2025-61260 — OpenAI Codex CLI (Command Injection)
Codex implicitly trusts MCP server entries.
Attackers modifying .env or .codex/config.toml can trigger automatic command execution on startup.
Google Antigravity (Multiple High-Impact Flaws)
- Indirect prompt injections via poisoned websites
- Credential harvesting through manipulated Gemini instructions
- Persistent backdoors via trusted workspace abuse
- RCE through AI-driven file edits
PromptPwnd — New Attack Class
A novel attack targeting:
- GitHub Actions
- GitLab CI/CD
By injecting prompts into pipelines, attackers can trigger privileged tools → secret exfiltration or RCE.
Why This Matters: AI Expands the Attack Surface
As enterprises rapidly adopt AI agents in coding workflows, the attack surface grows dramatically.
LLMs cannot reliably distinguish:
- User-provided instructions from
- Malicious hidden inputs in files, URLs, PRs, or code comments.
This weakness enables:
- Repository compromise
- CI/CD exploitation
- Supply-chain attacks
- Stealthy backdoors
- Sensitive data leaks
“Secure for AI” must become a standard — AI introduces new risks traditional security models never addressed,” Marzouk noted.