LIVE THREATS
CRITICAL Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain // MEDIUM Changes in the system prompt between Claude Opus 4.6 and 4.7 // HIGH Vercel Breach Tied to Context AI Hack Exposes Limited Customer Credentials // HIGH On Anthropic’s Mythos Preview and Project Glasswing // MEDIUM Artemis Emerges From Stealth With $70 Million in Funding // HIGH OpenAI Revokes macOS App Certificate After Malicious Axios Supply Chain Incident // HIGH Old Vulnerabilities get a new life, all thanks to AI! // CRITICAL Cursor AI Vulnerability Exposed Developer Devices // HIGH Claude Code, Gemini CLI, GitHub Copilot Agents Vulnerable to Prompt Injection via Comments // MEDIUM OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal //
ATLAS OWASP CRITICAL Active exploitation · Immediate action required RELEVANCE ▲ 9.4

Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain

TL;DR CRITICAL
  • What happened: Anthropic MCP SDK's STDIO defaults allow arbitrary OS command execution across 7,000+ AI servers.
  • Who's at risk: Any developer or organisation running MCP-based AI agents using LiteLLM, LangChain, Flowise, or other affected frameworks is directly exposed to unauthenticated RCE and data theft.
  • Act now: Audit all MCP server deployments for STDIO transport exposure and apply available vendor patches immediately · Restrict MCP configuration endpoints with authentication and network-level access controls · Monitor MCP marketplace integrations for hidden or injected STDIO configurations using runtime inspection tooling
Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain

Overview

A critical architectural vulnerability in Anthropic’s Model Context Protocol (MCP) SDK has been disclosed by OX Security researchers, enabling arbitrary remote code execution (RCE) across any system running a vulnerable MCP implementation. The flaw is embedded in the official SDK across Python, TypeScript, Java, and Rust, and affects over 7,000 publicly accessible servers with a combined download count exceeding 150 million. Anthropic has acknowledged the behaviour but declined to modify the protocol, classifying it as ’expected’ design — leaving the broader AI ecosystem exposed.

Technical Analysis

The root cause lies in unsafe defaults within MCP’s STDIO (standard input/output) transport interface. The STDIO transport was designed to spawn a local MCP server process and return its handle to the LLM. However, the implementation accepts and executes arbitrary OS commands — if the command successfully starts an STDIO server, the handle is returned; if not, an error is returned after the command has already executed.

This means an attacker can supply any OS-level command in place of a legitimate MCP server command, achieving execution prior to any error handling or validation. Four distinct attack vectors were identified:

  • Unauthenticated command injection via MCP STDIO — direct exploitation of default configurations
  • Authenticated command injection with hardening bypass — circumventing basic security controls
  • Zero-click prompt injection via MCP configuration edit — no user interaction required
  • Supply chain injection via MCP marketplaces — malicious configurations delivered through package repositories

Affected projects include LiteLLM (CVE-2026-30623, patched), LangChain-Chatchat (CVE-2026-30617), Flowise (CVE-2026-40933), DocsGPT (CVE-2026-26015, patched), Agent Zero (CVE-2026-30624), and others. Related independent findings include CVE-2025-49596 in MCP Inspector and CVE-2026-22252 in LibreChat.

Framework Mapping

MITRE ATLAS:

  • AML.T0010 (ML Supply Chain Compromise): The vulnerability is embedded in an official SDK, propagating risk to all downstream consumers.
  • AML.T0051 (LLM Prompt Injection): The zero-click vector leverages prompt injection to trigger malicious STDIO configuration edits.
  • AML.T0057 (LLM Data Leakage): Successful exploitation grants access to API keys, chat histories, and internal databases.

OWASP LLM Top 10:

  • LLM05 (Supply Chain Vulnerabilities): Core issue originates in the official Anthropic SDK.
  • LLM07 (Insecure Plugin Design): STDIO transport lacks input validation and authentication enforcement.
  • LLM08 (Excessive Agency): MCP agents can execute OS-level commands beyond intended operational scope.

Impact Assessment

The blast radius is substantial. With 150 million downloads and 7,000+ exposed servers, the vulnerability touches a significant portion of the production AI agent ecosystem. Successful exploitation yields direct access to sensitive user data, API credentials, internal databases, and conversation histories. The zero-click prompt injection vector is particularly severe, requiring no user interaction and enabling automated, scalable attacks. The refusal by Anthropic to patch the underlying protocol means even patched downstream projects remain structurally at risk if the SDK behaviour is re-enabled or misconfigured.

Mitigation & Recommendations

  1. Apply vendor patches immediately for LiteLLM, DocsGPT, Bisheng, and any other patched frameworks in your stack.
  2. Restrict STDIO transport endpoints — enforce authentication on all MCP configuration interfaces and limit network exposure.
  3. Audit MCP marketplace integrations — inspect all third-party MCP server configurations for unexpected or injected STDIO commands.
  4. Implement runtime command allow-listing — constrain what commands the MCP STDIO interface is permitted to invoke at the OS level.
  5. Monitor for anomalous process spawning from AI agent processes as an indicator of exploitation.

References