LIVE THREATS
ATLAS OWASP HIGH Significant risk · Prioritise patching RELEVANCE ▲ 9.0

Google Patches Antigravity IDE Flaw Enabling Prompt Injection Code Execution

TL;DR HIGH
  • What happened: Google's Antigravity agentic IDE had a prompt injection flaw enabling sandbox-bypassing arbitrary code execution.
  • Who's at risk: Developers using Google Antigravity IDE are most exposed, particularly those opening files or repositories from untrusted sources where hidden prompt injection payloads could trigger silent code execution.
  • Act now: Update Google Antigravity IDE to the patched version released February 28, 2026 · Audit all agentic tool interfaces for strict parameter validation before sandbox constraints are applied · Treat all external file content as untrusted input and scan for embedded prompt injection payloads before AI agent processing
Google Patches Antigravity IDE Flaw Enabling Prompt Injection Code Execution

Overview

A high-severity vulnerability in Google’s agentic integrated development environment (IDE), Antigravity, has been disclosed and patched following responsible disclosure by Pillar Security researcher Dan Lisichkin on January 7, 2026. The flaw allowed an attacker to achieve arbitrary code execution by exploiting insufficient input validation in the IDE’s native find_by_name tool, effectively bypassing Antigravity’s Strict Mode—a sandboxed security configuration designed to restrict network access, out-of-workspace writes, and command execution scope. Google addressed the vulnerability on February 28, 2026.

The significance of this finding extends beyond a single product patch: it exposes a structural weakness common to agentic AI systems, where tool calls are executed with elevated trust before security guardrails are applied, and where autonomous agent behaviour eliminates the human review layer that traditional security models assume.

Technical Analysis

The attack exploits two distinct weaknesses in combination:

  1. Unsanitised tool parameter input: The find_by_name tool accepts a Pattern parameter intended for filename search patterns, which is passed directly to the underlying fd binary without strict validation. This allows an attacker to inject arbitrary fd flags alongside the pattern string.

  2. Pre-constraint tool execution: The find_by_name call is interpreted as a native tool invocation before Strict Mode constraints are enforced, meaning sandbox restrictions do not apply at the point of exploitation.

The critical injected flag is -X (exec-batch), which instructs fd to execute a specified binary against each matched file. By crafting a Pattern value such as -Xsh, an attacker causes fd to pass matched workspace files to sh for execution as shell scripts.

The full attack chain:

1. Agent or attacker writes a malicious shell script to the workspace (permitted action)
2. Attacker injects `-Xsh` into the Pattern parameter of find_by_name
3. fd executes the staged script via sh against matched files
4. Arbitrary code runs outside Strict Mode constraints

Critically, this chain can be triggered via indirect prompt injection: a malicious actor embeds hidden instructions inside an externally sourced file (e.g., a code comment or document). When an unsuspecting user pulls this file into Antigravity and the AI agent processes it, the injected instructions autonomously stage and trigger the exploit—with no additional user interaction required.

Framework Mapping

  • AML.T0051 (LLM Prompt Injection): The indirect attack vector relies entirely on injecting attacker-controlled instructions through external content consumed by the AI agent.
  • AML.T0043 (Craft Adversarial Data): Malicious files with hidden instructions constitute crafted adversarial inputs designed to manipulate agent behaviour.
  • AML.T0047 (ML-Enabled Product or Service): The vulnerability exists within an AI-powered developer tool, highlighting risks in this product category.
  • LLM01 (Prompt Injection) and LLM07 (Insecure Plugin Design): The find_by_name tool functions as an insecure plugin with no parameter sanitisation.
  • LLM08 (Excessive Agency): The agent autonomously executes the full exploit chain once the injection lands, with no human checkpoint.

Impact Assessment

Developers using Antigravity IDE—particularly those working with external repositories, third-party codebases, or unvetted file sources—were at direct risk. Successful exploitation would yield arbitrary code execution within the developer’s environment, potentially enabling credential theft, workspace exfiltration, or lateral movement. The indirect injection vector makes this especially dangerous as it requires no attacker access to the victim’s account or systems.

Mitigation & Recommendations

  • Patch immediately: Ensure Antigravity IDE is updated to the version patched on or after February 28, 2026.
  • Enforce strict parameter allow-listing: Validate all tool input parameters against explicit allow-lists; reject any input containing flag characters (-) in pattern fields.
  • Apply sandbox constraints at tool invocation: Security modes must be enforced at the earliest point of tool call parsing, not after native invocation.
  • Treat external content as untrusted: Scan files from external sources for embedded prompt injection payloads before exposing them to AI agent processing.
  • Audit all agentic tool interfaces: Conduct systematic review of every tool exposed to LLM agents for parameter injection surfaces.

References