<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>GRID THE GREY — AI Threat Intelligence | GRID THE GREY</title><link>https://gridthegrey.com/</link><description>Real-time AI security intelligence — adversarial ML, LLM vulnerabilities, and supply chain threats mapped to MITRE ATLAS and OWASP LLM Top 10.</description><generator>Hugo</generator><language>en-us</language><copyright/><lastBuildDate>Sat, 16 May 2026 02:55:12 +0530</lastBuildDate><atom:link href="https://gridthegrey.com/index.xml" rel="self" type="application/rss+xml"/><item><title>Four OpenClaw Flaws Chain Together for Full AI Agent Compromise</title><link>https://gridthegrey.com/posts/four-openclaw-flaws-chain-together-for-full-ai-agent-compromise/</link><pubDate>Fri, 15 May 2026 21:24:57 +0000</pubDate><guid>https://gridthegrey.com/posts/four-openclaw-flaws-chain-together-for-full-ai-agent-compromise/</guid><category>Threat Level: CRITICAL</category><category>Agentic AI</category><category>LLM Security</category><category>Prompt Injection</category><category>Research</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0018 - Backdoor ML Model</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0012 - Valid Accounts</category><description>Researchers at Cyera disclosed four vulnerabilities in OpenClaw, an AI agent runtime platform, that can be chained to achieve credential theft, privilege escalation, and persistent backdoor access. The attack chain, dubbed 'Claw Chain', exploits sandbox escapes, allowlist bypasses, and a spoofable ownership flag in the MCP loopback runtime to weaponise the agent's own privileges against the host environment. All four CVEs have been patched in OpenClaw version 2026.4.22, and users should update immediately.</description></item><item><title>Malicious node-ipc Versions Target Cloud, AI Tool Credentials via Supply Chain Backdoor</title><link>https://gridthegrey.com/posts/malicious-node-ipc-versions-target-cloud-ai-tool-credentials-via-supply-chain/</link><pubDate>Fri, 15 May 2026 21:24:13 +0000</pubDate><guid>https://gridthegrey.com/posts/malicious-node-ipc-versions-target-cloud-ai-tool-credentials-via-supply-chain/</guid><category>Threat Level: CRITICAL</category><category>Supply Chain</category><category>LLM Security</category><category>Industry News</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0012 - Valid Accounts</category><category>AML.T0057 - LLM Data Leakage</category><description>Three versions of the widely used node-ipc npm package were found to contain obfuscated stealer/backdoor payloads published by an unauthorised maintainer account. The malware harvests 90 categories of developer secrets — including Claude AI and Kiro IDE configurations, AWS, Azure, and GCP credentials — and exfiltrates them via HTTPS and DNS tunnelling to an attacker-controlled domain. 
The compromise is notable for bypassing npm lifecycle hooks entirely and, in one version, targeting a specific developer via pre-computed SHA-256 fingerprinting.</description></item><item><title>Microsoft Outlines Defense-in-Depth Framework for Autonomous AI Agents</title><link>https://gridthegrey.com/posts/microsoft-outlines-defense-in-depth-framework-for-autonomous-ai-agents/</link><pubDate>Fri, 15 May 2026 21:22:59 +0000</pubDate><guid>https://gridthegrey.com/posts/microsoft-outlines-defense-in-depth-framework-for-autonomous-ai-agents/</guid><category>Threat Level: MEDIUM</category><category>Agentic AI</category><category>LLM Security</category><category>Prompt Injection</category><category>Supply Chain</category><category>Research</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0054 - LLM Jailbreak</category><description>Microsoft's Security Blog introduces a layered defense-in-depth model specifically designed for autonomous AI agents, which now invoke tools, modify data, and trigger workflows with minimal human oversight. The framework identifies novel threat classes — including agent hijacking, intent breaking, and supply chain compromise — that are amplified by agentic autonomy. The guidance positions application-layer architecture, permissions, and governance as the most critical controls as agent autonomy scales.</description></item><item><title>Rust Compiler Project Drafts Formal LLM Contribution Policy</title><link>https://gridthegrey.com/posts/rust-compiler-project-drafts-formal-llm-contribution-policy/</link><pubDate>Fri, 15 May 2026 21:18:40 +0000</pubDate><guid>https://gridthegrey.com/posts/rust-compiler-project-drafts-formal-llm-contribution-policy/</guid><category>Threat Level: MEDIUM</category><category>Supply Chain</category><category>Regulatory</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0020 - Poison Training Data</category><category>AML.T0031 - Erode ML Model Integrity</category><description>The Rust compiler project (rust-lang/rust) is formalising a policy governing LLM use in contributions, signalling growing institutional recognition of AI-generated code risks in critical infrastructure. The policy, proposed via pull request on rust-forge, is scoped to the core compiler repository and will be linked from contribution guidelines. 
This represents a significant governance precedent for security-critical open-source projects managing supply chain integrity amid widespread LLM-assisted development.</description></item><item><title>TanStack Supply Chain Attack Compromises OpenAI Developer Devices and Signing Certificates</title><link>https://gridthegrey.com/posts/tanstack-supply-chain-attack-compromises-openai-developer-devices-and-signing/</link><pubDate>Fri, 15 May 2026 21:16:27 +0000</pubDate><guid>https://gridthegrey.com/posts/tanstack-supply-chain-attack-compromises-openai-developer-devices-and-signing/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0012 - Valid Accounts</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>A supply chain attack targeting TanStack via the Mini Shai-Hulud malware compromised two OpenAI employee devices, exposing internal source code repositories and code-signing certificates for macOS, iOS, and Windows apps. While no user data or production systems were breached, OpenAI was forced to revoke and reissue signing certificates, requiring macOS users to update ChatGPT Desktop, Codex, and Atlas apps before June 12, 2026. The incident marks OpenAI's second certificate rotation in two months and is part of a broader campaign by threat actor TeamPCP targeting major AI and open-source ecosystems.</description></item><item><title>TeamPCP Steals 5GB of Mistral AI Source Code via Supply Chain Attack</title><link>https://gridthegrey.com/posts/teampcp-steals-5gb-of-mistral-ai-source-code-via-supply-chain-attack/</link><pubDate>Fri, 15 May 2026 21:14:57 +0000</pubDate><guid>https://gridthegrey.com/posts/teampcp-steals-5gb-of-mistral-ai-source-code-via-supply-chain-attack/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Model Theft</category><category>LLM Security</category><category>Industry News</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0044 - Full ML Model Access</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0012 - Valid Accounts</category><description>The TeamPCP threat group has compromised Mistral AI's codebase management system via the Shai-Hulud software supply chain attack, stealing approximately 5GB of internal repositories covering training, fine-tuning, benchmarking, and inference pipelines. The hackers are demanding $25,000 for nearly 450 repositories, threatening to leak them publicly within a week if the ransom is not paid. 
Mistral AI confirmed the breach but stated that core repositories, hosted services, managed user data, and research environments were not affected.</description></item><item><title>Agentic AI Red Teaming Emerges as Defence Against AI-Speed Attack Chains</title><link>https://gridthegrey.com/posts/agentic-ai-red-teaming-emerges-as-defence-against-ai-speed-attack-chains/</link><pubDate>Thu, 14 May 2026 04:48:10 +0000</pubDate><guid>https://gridthegrey.com/posts/agentic-ai-red-teaming-emerges-as-defence-against-ai-speed-attack-chains/</guid><category>Threat Level: MEDIUM</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0043 - Craft Adversarial Data</category><description>Sweet Security has launched 'Sweet Attack', a continuous agentic AI red teaming platform designed to counter the growing asymmetry between AI-assisted attackers and human defenders — a tipping point the industry has termed the 'Mythos Moment'. The platform differentiates itself by grounding frontier model reasoning in live runtime telemetry from each customer's own environment, including topology, identity paths, and unencrypted Layer 7 exposure, to identify genuinely exploitable attack chains rather than theoretical ones. The development signals a broader industry shift toward autonomous, environment-aware AI agents as a necessary component of modern security operations.</description></item><item><title>AI Agents Weaponised to Generate Custom Attack Tools in LatAm Campaigns</title><link>https://gridthegrey.com/posts/ai-agents-weaponised-to-generate-custom-attack-tools-in-latam-campaigns/</link><pubDate>Thu, 14 May 2026 04:46:57 +0000</pubDate><guid>https://gridthegrey.com/posts/ai-agents-weaponised-to-generate-custom-attack-tools-in-latam-campaigns/</guid><category>Threat Level: HIGH</category><category>Agentic AI</category><category>LLM Security</category><category>Jailbreaks</category><category>Industry News</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0043 - Craft Adversarial Data</category><description>Two threat campaigns targeting organisations in Mexico and Brazil have leveraged AI agents to dynamically generate customised hacking tools, marking a notable escalation in automated, AI-assisted cyberattacks. The use of AI agents for on-the-fly tool generation lowers the technical barrier for attackers and accelerates the attack cycle. 
This represents a concrete, in-the-wild demonstration of agentic AI being exploited as an offensive capability.</description></item><item><title>GPT-5.5 Matches Specialist Models in Vulnerability Discovery, Democratising Cyber Offence</title><link>https://gridthegrey.com/posts/gpt-5-5-matches-specialist-models-in-vulnerability-discovery-democratising-cyber/</link><pubDate>Thu, 14 May 2026 04:46:14 +0000</pubDate><guid>https://gridthegrey.com/posts/gpt-5-5-matches-specialist-models-in-vulnerability-discovery-democratising-cyber/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Research</category><category>Industry News</category><category>Jailbreaks</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0043 - Craft Adversarial Data</category><description>The UK AI Security Institute has evaluated GPT-5.5 and found it comparable to Claude Mythos in identifying security vulnerabilities, with both models now generally available to the public. This parity raises serious concerns about the lowered barrier to entry for offensive cyber operations, as adversaries can leverage widely accessible models for vulnerability research. Commentary from security experts highlights that LLM-based vulnerability discovery is constrained to known attack patterns, but the existence of jailbreaks means guardrails provide only partial mitigation.</description></item><item><title>Microsoft MDASH Agentic AI System Discovers 16 Critical Windows Vulnerabilities</title><link>https://gridthegrey.com/posts/microsoft-mdash-agentic-ai-system-discovers-16-critical-windows-vulnerabilities/</link><pubDate>Thu, 14 May 2026 04:45:04 +0000</pubDate><guid>https://gridthegrey.com/posts/microsoft-mdash-agentic-ai-system-discovers-16-critical-windows-vulnerabilities/</guid><category>Threat Level: HIGH</category><category>Agentic AI</category><category>Research</category><category>Industry News</category><category>LLM Security</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0043 - Craft Adversarial Data</category><description>Microsoft has disclosed MDASH, a multi-model agentic AI scanning system that autonomously discovered 16 vulnerabilities patched in May 2026's Patch Tuesday, including two critical RCE flaws. The system orchestrates over 100 specialised AI agents in a structured pipeline covering auditing, debating, and proof-of-exploitability stages. 
MDASH represents a significant shift in how AI is being deployed offensively and defensively within the vulnerability research lifecycle, with direct implications for how agentic AI systems are trusted, scoped, and governed.</description></item><item><title>OpenAI Daybreak Deploys Agentic AI Models for Vulnerability Detection and Patching</title><link>https://gridthegrey.com/posts/openai-daybreak-deploys-agentic-ai-models-for-vulnerability-detection-and/</link><pubDate>Wed, 13 May 2026 08:28:06 +0000</pubDate><guid>https://gridthegrey.com/posts/openai-daybreak-deploys-agentic-ai-models-for-vulnerability-detection-and/</guid><category>Threat Level: MEDIUM</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0051 - LLM Prompt Injection</category><description>OpenAI has launched Daybreak, an AI-powered cybersecurity platform combining GPT-5.5 variants and Codex Security to automate vulnerability detection, threat modelling, and patch validation for enterprise codebases. The initiative introduces a tiered model access structure — including a permissive 'GPT-5.5-Cyber' for red teaming — raising questions about dual-use risk and model misuse if access controls are circumvented. The rollout also contextualises a broader industry tension: AI is accelerating vulnerability discovery faster than defenders can remediate, contributing to triage fatigue and hallucinated bug reports.</description></item><item><title>State Machine Guardrails Proposed to Rein In Uncontrolled AI Agent Tool Access</title><link>https://gridthegrey.com/posts/state-machine-guardrails-proposed-to-rein-in-uncontrolled-ai-agent-tool-access/</link><pubDate>Wed, 13 May 2026 08:26:56 +0000</pubDate><guid>https://gridthegrey.com/posts/state-machine-guardrails-proposed-to-rein-in-uncontrolled-ai-agent-tool-access/</guid><category>Threat Level: LOW</category><category>Agentic AI</category><category>LLM Security</category><category>Research</category><category>Industry News</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>Statewright is an open-source framework that enforces state machine constraints on AI agents, restricting which tools agents can invoke during each phase of a workflow. The project directly addresses the Excessive Agency problem, where AI agents operating with broad, unconstrained tool access can take unintended or harmful actions. 
While a defensive development rather than a threat disclosure, it signals growing practitioner awareness of agentic AI risk and offers a concrete mitigation pattern for teams deploying coding agents like Claude Code, Codex, or Cursor.</description></item><item><title>Mini Shai-Hulud Supply Chain Worm Compromises Mistral AI, Guardrails AI and TanStack Packages</title><link>https://gridthegrey.com/posts/supply-chain-worm-compromises-mistral-ai-guardrails-ai-and-tanstack-packages/</link><pubDate>Wed, 13 May 2026 08:08:33 +0000</pubDate><guid>https://gridthegrey.com/posts/supply-chain-worm-compromises-mistral-ai-guardrails-ai-and-tanstack-packages/</guid><category>Threat Level: CRITICAL</category><category>Supply Chain</category><category>LLM Security</category><category>Agentic AI</category><category>Industry News</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0018 - Backdoor ML Model</category><category>AML.T0057 - LLM Data Leakage</category><description>The TeamPCP threat actor has executed a broad supply chain campaign dubbed Mini Shai-Hulud, injecting credential-stealing malware into npm and PyPI packages from major AI and developer tooling ecosystems including Mistral AI, Guardrails AI, and TanStack. The malware profiles execution environments, exfiltrates cloud, CI, and AI tool credentials, and establishes persistence inside Claude Code and VS Code IDEs. The TanStack compromise alone affected 42 packages and 84 versions, exploiting a chained GitHub Actions attack to inject malicious payloads without stealing npm tokens directly.</description></item><item><title>Adversaries Leverage LLMs to Accelerate Exploit Development and Attack Automation</title><link>https://gridthegrey.com/posts/adversaries-leverage-llms-to-accelerate-exploit-development-and-attack/</link><pubDate>Tue, 12 May 2026 09:17:13 +0000</pubDate><guid>https://gridthegrey.com/posts/adversaries-leverage-llms-to-accelerate-exploit-development-and-attack/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Agentic AI</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0043 - Craft Adversarial Data</category><description>Threat actors are now actively deploying large language models to accelerate exploit development and automate complex cyberattack workflows, marking a significant evolution in adversarial tooling. This shift lowers the technical barrier for sophisticated attack execution, enabling less-skilled actors to produce functional exploits at scale. 
The trend signals a structural change in the offensive threat landscape, with AI acting as a force multiplier for adversaries.</description></item><item><title>AI-Developed Zero-Day Exploit Used in Mass Exploitation Attempt, Mandiant Warns</title><link>https://gridthegrey.com/posts/ai-developed-zero-day-exploit-used-in-mass-exploitation-attempt-mandiant-warns/</link><pubDate>Tue, 12 May 2026 09:12:51 +0000</pubDate><guid>https://gridthegrey.com/posts/ai-developed-zero-day-exploit-used-in-mass-exploitation-attempt-mandiant-warns/</guid><category>Threat Level: CRITICAL</category><category>LLM Security</category><category>Agentic AI</category><category>Adversarial ML</category><category>Supply Chain</category><category>Research</category><category>Industry News</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0043 - Craft Adversarial Data</category><category>AML.T0015 - Evade ML Model</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0054 - LLM Jailbreak</category><description>Google's Threat Intelligence Group (GTIG) has identified, for the first time, a criminal threat actor using a zero-day exploit believed to have been AI-generated, intended for mass exploitation before proactive counter-discovery intervened. The report also documents AI-augmented malware development, autonomous attack orchestration via AI-enabled malware (PROMPTSPY), and obfuscated LLM access pipelines used by adversaries to bypass usage controls. Nation-state actors from China and North Korea are actively pursuing AI-assisted vulnerability discovery, marking a significant escalation in adversarial AI capability.</description></item><item><title>AI-Generated Zero-Day Exploit Bypasses 2FA in First Confirmed Wild Use</title><link>https://gridthegrey.com/posts/ai-generated-zero-day-exploit-bypasses-2fa-in-first-confirmed-wild-use/</link><pubDate>Tue, 12 May 2026 08:58:44 +0000</pubDate><guid>https://gridthegrey.com/posts/ai-generated-zero-day-exploit-bypasses-2fa-in-first-confirmed-wild-use/</guid><category>Threat Level: CRITICAL</category><category>LLM Security</category><category>Agentic AI</category><category>Research</category><category>Industry News</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0043 - Craft Adversarial Data</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0012 - Valid Accounts</category><description>Google's Threat Intelligence Group has confirmed the first known instance of a threat actor using an AI model to discover and weaponize a zero-day vulnerability — a 2FA bypass in a popular open-source web administration tool. The exploit, delivered via a Python script bearing hallmarks of LLM-generated code (including hallucinated CVSS scores and structured docstrings), was designed for mass exploitation. 
This marks a significant inflection point in the offensive AI threat landscape, demonstrating that AI-assisted vulnerability discovery and weaponization have moved from theoretical risk to confirmed operational reality.</description></item><item><title>LLMs Demonstrate Strong Capability for Covert Text Steganography</title><link>https://gridthegrey.com/posts/llms-demonstrate-strong-capability-for-covert-text-steganography/</link><pubDate>Tue, 12 May 2026 04:26:49 +0000</pubDate><guid>https://gridthegrey.com/posts/llms-demonstrate-strong-capability-for-covert-text-steganography/</guid><category>Threat Level: MEDIUM</category><category>LLM Security</category><category>Adversarial ML</category><category>Research</category><category>AML.T0015 - Evade ML Model</category><category>AML.T0043 - Craft Adversarial Data</category><category>AML.T0057 - LLM Data Leakage</category><description>Research highlighted by Bruce Schneier confirms that LLMs are highly effective at embedding hidden messages within seemingly normal text, a technique known as text-in-text steganography. This capability raises significant concerns for covert communications, data exfiltration, and the evasion of AI content moderation systems. Even small models with ~4 billion parameters demonstrate robust encoding and decoding of obfuscated language, lowering the barrier for adversarial misuse.</description></item><item><title>Typosquatted OpenAI Repo on Hugging Face Delivered Rust Infostealer to 244K Users</title><link>https://gridthegrey.com/posts/typosquatted-openai-repo-on-hugging-face-delivered-rust-infostealer-to-244k/</link><pubDate>Mon, 11 May 2026 09:31:05 +0000</pubDate><guid>https://gridthegrey.com/posts/typosquatted-openai-repo-on-hugging-face-delivered-rust-infostealer-to-244k/</guid><category>Threat Level: CRITICAL</category><category>Supply Chain</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0019 - Publish Poisoned Datasets</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>A malicious Hugging Face repository impersonated OpenAI's legitimate Privacy Filter model, cloning its description verbatim to gain credibility and reach the platform's trending list with 244,000 downloads. The repository delivered a multi-stage attack chain culminating in a Rust-based information stealer targeting browser credentials, cryptocurrency wallets, and Discord data on Windows machines. 
The attack leveraged a dead-drop resolver pattern via a public JSON paste service, allowing operators to swap payloads without modifying the repository itself.</description></item><item><title>Fake OpenAI Repository on Hugging Face Delivers Rust-Based Infostealer</title><link>https://gridthegrey.com/posts/fake-openai-repository-on-hugging-face-delivers-rust-based-infostealer/</link><pubDate>Sun, 10 May 2026 05:10:54 +0000</pubDate><guid>https://gridthegrey.com/posts/fake-openai-repository-on-hugging-face-delivers-rust-based-infostealer/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0019 - Publish Poisoned Datasets</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>A malicious Hugging Face repository impersonating OpenAI's 'Privacy Filter' project reached #1 on the platform's trending list and accumulated 244,000 downloads before removal, delivering a multi-stage infostealer to Windows users. The attack chain used a disguised Python loader to execute PowerShell commands, ultimately deploying a Rust-based payload capable of harvesting browser credentials, crypto wallets, SSH/VPN configs, and screenshots. The campaign highlights the growing risk of AI/ML supply chain attacks through trusted model-sharing platforms.</description></item><item><title>ClaudeBleed Flaw Lets Rogue Chrome Extensions Hijack AI Agent</title><link>https://gridthegrey.com/posts/claudebleed-flaw-lets-rogue-chrome-extensions-hijack-ai-agent/</link><pubDate>Sat, 09 May 2026 04:08:41 +0000</pubDate><guid>https://gridthegrey.com/posts/claudebleed-flaw-lets-rogue-chrome-extensions-hijack-ai-agent/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Prompt Injection</category><category>Agentic AI</category><category>Research</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0043 - Craft Adversarial Data</category><description>A vulnerability dubbed ClaudeBleed in Anthropic's Claude Chrome extension allows any browser extension to inject arbitrary prompts into the Claude AI agent by exploiting lax permission checks and improper trust validation. Attackers can bypass user confirmation protections via DOM manipulation and repeated message forging, enabling full agent takeover for information theft or unauthorized actions. The flaw effectively breaks Chrome's extension security model and exposes users running Claude's agentic capabilities to third-party extension compromise.</description></item></channel></rss>