<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>GRID THE GREY — AI Threat Intelligence</title><link>https://gridthegrey.com/</link><description>Real-time AI security intelligence — adversarial ML, LLM vulnerabilities, and supply chain threats mapped to MITRE ATLAS and OWASP LLM Top 10.</description><generator>Hugo</generator><language>en-us</language><copyright/><lastBuildDate>Thu, 16 Apr 2026 10:14:22 +0530</lastBuildDate><atom:link href="https://gridthegrey.com/index.xml" rel="self" type="application/rss+xml"/><item><title>Deterministic + Agentic AI: The Architecture Exposure Validation Requires</title><link>https://gridthegrey.com/posts/deterministic-agentic-ai-the-architecture-exposure-validation-requires/</link><pubDate>Thu, 16 Apr 2026 04:44:10 +0000</pubDate><guid>https://gridthegrey.com/posts/deterministic-agentic-ai-the-architecture-exposure-validation-requires/</guid><category>Threat Level: MEDIUM</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><description>The article examines the architectural tension between fully agentic AI systems and deterministic validation frameworks in security testing contexts, arguing that unconstrained AI autonomy introduces repeatability and auditability risks. It highlights how probabilistic AI behaviour — while valuable for exploration — undermines the measurable, consistent outcomes required for enterprise security validation programmes. The piece reflects a broader industry debate about governing AI agency in high-stakes operational environments.</description></item><item><title>‘By Design’ Flaw in MCP Could Enable Widespread AI Supply Chain Attacks</title><link>https://gridthegrey.com/posts/by-design-flaw-in-mcp-could-enable-widespread-ai-supply-chain-attacks/</link><pubDate>Thu, 16 Apr 2026 04:24:54 +0000</pubDate><guid>https://gridthegrey.com/posts/by-design-flaw-in-mcp-could-enable-widespread-ai-supply-chain-attacks/</guid><category>Threat Level: CRITICAL</category><category>Supply Chain</category><category>LLM Security</category><category>Prompt Injection</category><category>Agentic AI</category><category>Research</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0031 - Erode ML Model Integrity</category><description>A structural vulnerability in Anthropic's Model Context Protocol (MCP) allows unsanitised commands to be executed silently within AI environments, potentially enabling full system compromise. Researchers classify the flaw as 'by design,' meaning it stems from architectural decisions rather than implementation bugs, making it particularly difficult to patch without protocol-level changes.
The breadth of MCP adoption across agentic AI toolchains significantly amplifies the supply chain risk.</description></item><item><title>Capsule Security Emerges From Stealth With $7 Million in Funding</title><link>https://gridthegrey.com/posts/capsule-security-emerges-from-stealth-with-7-million-in-funding/</link><pubDate>Thu, 16 Apr 2026 04:23:06 +0000</pubDate><guid>https://gridthegrey.com/posts/capsule-security-emerges-from-stealth-with-7-million-in-funding/</guid><category>Threat Level: MEDIUM</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0057 - LLM Data Leakage</category><description>Capsule Security, an Israeli startup, has emerged from stealth with $7 million in seed funding; the company focuses on runtime security for AI agents, continuously monitoring their behaviour to detect and prevent unsafe or malicious actions. This positions the company within the rapidly growing agentic AI security space, where autonomous agents executing actions on behalf of users represent a significant and underexplored attack surface. The funding signals growing investor recognition of the risks posed by unmonitored AI agent behaviour, including prompt injection, excessive agency, and unintended tool use.</description></item><item><title>Does Gas Town 'steal' usage from users' LLM credits to improve itself?</title><link>https://gridthegrey.com/posts/does-gas-town-steal-usage-from-users-llm-credits-to-improve-itself/</link><pubDate>Thu, 16 Apr 2026 04:20:31 +0000</pubDate><guid>https://gridthegrey.com/posts/does-gas-town-steal-usage-from-users-llm-credits-to-improve-itself/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0012 - Valid Accounts</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><description>Gas Town, a developer tool with 14.2k GitHub stars, allegedly ships configuration files that autonomously consume users' LLM API credits and leverage their GitHub account permissions to perform work on the maintainer's own repository — without explicit user consent. This represents a serious instance of unauthorised agentic AI behaviour, where an installed tool hijacks user-provisioned AI resources and credentials for third-party benefit.
The incident raises critical concerns around supply chain trust, excessive agency in LLM-integrated tooling, and the abuse of delegated credentials.</description></item><item><title>Microsoft, Salesforce Patch AI Agent Data Leak Flaws</title><link>https://gridthegrey.com/posts/microsoft-salesforce-patch-ai-agent-data-leak-flaws/</link><pubDate>Thu, 16 Apr 2026 04:19:34 +0000</pubDate><guid>https://gridthegrey.com/posts/microsoft-salesforce-patch-ai-agent-data-leak-flaws/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Prompt Injection</category><category>Agentic AI</category><category>Industry News</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0056 - LLM Meta Prompt Extraction</category><description>Prompt injection vulnerabilities in Salesforce Agentforce and Microsoft Copilot were patched after researchers demonstrated that external attackers could exploit them to exfiltrate sensitive user data. The flaws highlight systemic risks in enterprise AI agent deployments, where insufficient input sanitisation allows malicious content to hijack agent behaviour. Both vendors have issued patches, but the incidents underscore the growing attack surface introduced by agentic AI systems operating with elevated privileges.</description></item><item><title>What Claude Code's Source Revealed About AI Engineering Culture</title><link>https://gridthegrey.com/posts/what-claude-code-s-source-revealed-about-ai-engineering-culture/</link><pubDate>Thu, 16 Apr 2026 04:18:34 +0000</pubDate><guid>https://gridthegrey.com/posts/what-claude-code-s-source-revealed-about-ai-engineering-culture/</guid><category>Threat Level: MEDIUM</category><category>Supply Chain</category><category>Agentic AI</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0044 - Full ML Model Access</category><description>A packaging error exposed 512,000 lines of Claude Code's source, revealing severe code quality issues including a 3,167-line monolithic function, undocumented API waste, and regex-based sentiment analysis in an LLM product — raising questions about the security posture of AI-generated codebases. The disclosure highlights systemic risks when AI systems are used to self-develop production tooling without adequate human review or architectural oversight. 
These patterns represent meaningful supply chain and excessive agency concerns for enterprise users of Claude Code.</description></item><item><title>OpenAI Launches GPT-5.4-Cyber with Expanded Access for Security Teams</title><link>https://gridthegrey.com/posts/openai-launches-gpt-5-4-cyber-with-expanded-access-for-security-teams/</link><pubDate>Wed, 15 Apr 2026 09:03:45 +0000</pubDate><guid>https://gridthegrey.com/posts/openai-launches-gpt-5-4-cyber-with-expanded-access-for-security-teams/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Jailbreaks</category><category>Agentic AI</category><category>Prompt Injection</category><category>Industry News</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0031 - Erode ML Model Integrity</category><description>OpenAI has launched GPT-5.4-Cyber, a cybersecurity-optimised model variant, alongside an expanded Trusted Access for Cyber (TAC) programme targeting authenticated defenders and security teams. While the initiative is framed as a defensive measure, the dual-use nature of a vulnerability-detection model introduces significant risk of adversarial inversion — where threat actors could exploit the same capabilities to discover and weaponise unpatched vulnerabilities at scale. OpenAI acknowledges this risk and states it is iteratively strengthening safeguards against jailbreaks and adversarial prompt injection as access broadens.</description></item><item><title>AI-Driven Pushpaganda Scam Exploits Google Discover to Spread Scareware and Ad Fraud</title><link>https://gridthegrey.com/posts/ai-driven-pushpaganda-scam-exploits-google-discover-to-spread-scareware-and-ad/</link><pubDate>Wed, 15 Apr 2026 05:56:39 +0000</pubDate><guid>https://gridthegrey.com/posts/ai-driven-pushpaganda-scam-exploits-google-discover-to-spread-scareware-and-ad/</guid><category>Threat Level: HIGH</category><category>Adversarial ML</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0043 - Craft Adversarial Data</category><category>AML.T0019 - Publish Poisoned Datasets</category><description>A large-scale ad fraud and scareware campaign dubbed 'Pushpaganda' has been uncovered exploiting Google Discover by using AI-generated content to poison search discovery surfaces and lure users into enabling malicious push notifications. At its peak the operation generated 240 million bid requests across 113 domains in a single week, demonstrating how AI-generated disinformation can be weaponised as an automated delivery mechanism for financial fraud. 
The campaign highlights the growing abuse of generative AI to scale deceptive content operations against trusted platform surfaces.</description></item><item><title>Scanning for AI Models, (Tue, Apr 14th)</title><link>https://gridthegrey.com/posts/scanning-for-ai-models-tue-apr-14th/</link><pubDate>Wed, 15 Apr 2026 05:39:27 +0000</pubDate><guid>https://gridthegrey.com/posts/scanning-for-ai-models-tue-apr-14th/</guid><category>Threat Level: HIGH</category><category>Model Theft</category><category>LLM Security</category><category>Industry News</category><category>Research</category><category>AML.T0012 - Valid Accounts</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0044 - Full ML Model Access</category><category>AML.T0010 - ML Supply Chain Compromise</category><description>A single threat actor (IP 81.168.83.103) has been systematically scanning internet-facing systems since at least January 2026, specifically targeting credential files, API tokens, and configuration data associated with popular AI platforms including OpenAI, Anthropic Claude, HuggingFace, and the Openclaw/Clawdbot tools. The campaign focuses on harvesting AI API credentials and secrets stored in predictable file paths, representing a targeted reconnaissance effort against AI model deployments. If successful, these probes could enable API key theft, model access abuse, and broader compromise of AI-integrated systems.</description></item><item><title>How Hackers Are Thinking About AI</title><link>https://gridthegrey.com/posts/how-hackers-are-thinking-about-ai/</link><pubDate>Tue, 14 Apr 2026 16:52:07 +0000</pubDate><guid>https://gridthegrey.com/posts/how-hackers-are-thinking-about-ai/</guid><category>Threat Level: MEDIUM</category><category>Research</category><category>LLM Security</category><category>Industry News</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0043 - Craft Adversarial Data</category><description>A new academic paper analysed over 160 cybercrime forum conversations to understand how threat actors are discussing and adopting AI tools for criminal purposes. The research documents both misuse of legitimate AI platforms and attempts to build bespoke criminal AI models, revealing early-stage diffusion of AI capabilities within cybercriminal communities. The findings carry practical implications for law enforcement and security practitioners monitoring the evolving AI-enabled threat landscape.</description></item><item><title>Your MTTD Looks Great. Your Post-Alert Gap Doesn't</title><link>https://gridthegrey.com/posts/your-mttd-looks-great-your-post-alert-gap-doesn-t/</link><pubDate>Tue, 14 Apr 2026 09:40:03 +0000</pubDate><guid>https://gridthegrey.com/posts/your-mttd-looks-great-your-post-alert-gap-doesn-t/</guid><category>Threat Level: HIGH</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0044 - Full ML Model Access</category><description>The article highlights a critical operational gap in SOC environments where AI-accelerated adversarial capabilities — including an Anthropic model restricted after autonomously exploiting zero-day vulnerabilities — are outpacing defender response workflows. 
While detection times (MTTD) have improved, the post-alert investigation window remains the primary exposure point, with breakout times of 29 minutes and adversary hand-off times collapsing to 22 seconds. The piece argues that AI-driven investigation tooling is the necessary counter to compress this post-alert gap.</description></item><item><title>CSA: CISOs Should Prepare for Post-Mythos Exploit Storm</title><link>https://gridthegrey.com/posts/csa-cisos-should-prepare-for-post-mythos-exploit-storm/</link><pubDate>Tue, 14 Apr 2026 08:19:18 +0000</pubDate><guid>https://gridthegrey.com/posts/csa-cisos-should-prepare-for-post-mythos-exploit-storm/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Jailbreaks</category><category>Prompt Injection</category><category>Agentic AI</category><category>Industry News</category><category>Research</category><category>Regulatory</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0056 - LLM Meta Prompt Extraction</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><description>The Cloud Security Alliance has issued a warning about an anticipated 'AI vulnerability storm' following the release of Anthropic's Claude Mythos model, urging CISOs to prepare defensive postures in advance of expected exploit activity. The advisory signals growing institutional concern that major LLM releases create systemic risk windows as adversaries probe new model capabilities and attack surfaces. Security leaders are being advised to treat post-release periods of frontier AI models as high-alert intervals requiring elevated monitoring and response readiness.</description></item><item><title>OWASP GenAI Security Project Gets Update, New Tools Matrix</title><link>https://gridthegrey.com/posts/owasp-genai-security-project-gets-update-new-tools-matrix/</link><pubDate>Tue, 14 Apr 2026 08:18:19 +0000</pubDate><guid>https://gridthegrey.com/posts/owasp-genai-security-project-gets-update-new-tools-matrix/</guid><category>Threat Level: MEDIUM</category><category>LLM Security</category><category>Agentic AI</category><category>Regulatory</category><category>Industry News</category><category>Research</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0056 - LLM Meta Prompt Extraction</category><category>AML.T0010 - ML Supply Chain Compromise</category><description>OWASP has updated its GenAI Security Project to formally recognise 21 generative AI risks, releasing a new tools matrix to help organisations structure their defences. The update notably distinguishes between securing traditional GenAI systems and the emerging attack surface presented by agentic AI architectures. 
This guidance represents a significant standards-level acknowledgement that agentic AI requires its own dedicated security posture.</description></item><item><title>OpenAI Impacted by North Korea-Linked Axios Supply Chain Hack</title><link>https://gridthegrey.com/posts/openai-impacted-by-north-korea-linked-axios-supply-chain-hack/</link><pubDate>Tue, 14 Apr 2026 07:39:02 +0000</pubDate><guid>https://gridthegrey.com/posts/openai-impacted-by-north-korea-linked-axios-supply-chain-hack/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>OpenAI has been impacted by a supply chain attack attributed to North Korea-linked threat actors, involving a compromised macOS code signing certificate associated with the Axios JavaScript library. The incident highlights the vulnerability of major AI platforms to upstream software supply chain compromises, which could expose users to malicious code distributed through trusted tooling. Because OpenAI is a leading AI infrastructure provider, any compromise of its build or distribution pipeline carries significant downstream risk for enterprises relying on its models and APIs.</description></item><item><title>Python Supply-Chain Compromise</title><link>https://gridthegrey.com/posts/python-supply-chain-compromise/</link><pubDate>Mon, 13 Apr 2026 15:41:27 +0000</pubDate><guid>https://gridthegrey.com/posts/python-supply-chain-compromise/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>LLM Security</category><category>Industry News</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0018 - Backdoor ML Model</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>A malicious supply chain attack was discovered in litellm version 1.82.8, a widely used Python library that serves as a unified interface for interacting with large language model APIs. The compromised package contained a hidden .pth file that executes arbitrary code at every Python interpreter startup, meaning any developer or AI system with litellm installed could be silently compromised even if the package is never explicitly imported. Given litellm's central role in LLM-powered application stacks, this attack vector poses a significant threat to AI pipeline integrity, stored credentials, and downstream model infrastructure.</description></item><item><title>Over 1,000 Exposed ComfyUI Instances Targeted in Cryptomining Botnet Campaign</title><link>https://gridthegrey.com/posts/over-1000-exposed-comfyui-instances-targeted-in-cryptomining-botnet-campaign/</link><pubDate>Mon, 13 Apr 2026 14:44:56 +0000</pubDate><guid>https://gridthegrey.com/posts/over-1000-exposed-comfyui-instances-targeted-in-cryptomining-botnet-campaign/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><description>Threat actors are actively exploiting internet-exposed ComfyUI instances — a popular AI image generation platform — by abusing its custom node execution feature to achieve unauthenticated remote code execution.
Over 1,000 publicly accessible instances have been identified as targets, with compromised hosts enrolled in Monero and Conflux cryptomining operations and a Hysteria V2 proxy botnet. The attack highlights critical supply chain and insecure plugin design risks inherent in AI/ML tooling ecosystems.</description></item><item><title>Google's Vertex AI Is Over-Privileged. That's a Problem</title><link>https://gridthegrey.com/posts/google-s-vertex-ai-is-over-privileged-that-s-a-problem/</link><pubDate>Mon, 13 Apr 2026 14:39:34 +0000</pubDate><guid>https://gridthegrey.com/posts/google-s-vertex-ai-is-over-privileged-that-s-a-problem/</guid><category>Threat Level: HIGH</category><category>Agentic AI</category><category>LLM Security</category><category>Research</category><category>Industry News</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0012 - Valid Accounts</category><description>Palo Alto Networks researchers have identified over-privilege vulnerabilities in Google's Vertex AI platform, demonstrating how malicious actors could exploit AI agents to exfiltrate sensitive data and pivot into restricted cloud infrastructure. The findings highlight systemic risks in agentic AI deployments where excessive permissions granted to AI workloads expand the attack surface beyond traditional cloud security boundaries. This research underscores the growing urgency around securing AI agent permissions and enforcing least-privilege principles in enterprise ML platforms.</description></item><item><title>Flowise AI Agent Builder Under Active CVSS 10.0 RCE Exploitation; 12,000+ Instances Exposed</title><link>https://gridthegrey.com/posts/flowise-ai-agent-builder-under-active-cvss-10-0-rce-exploitation-12000-instances/</link><pubDate>Mon, 13 Apr 2026 14:19:20 +0000</pubDate><guid>https://gridthegrey.com/posts/flowise-ai-agent-builder-under-active-cvss-10-0-rce-exploitation-12000-instances/</guid><category>Threat Level: CRITICAL</category><category>Agentic AI</category><category>LLM Security</category><category>Supply Chain</category><category>Industry News</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0010 - ML Supply Chain Compromise</category><description>A maximum-severity (CVSS 10.0) remote code execution vulnerability in Flowise, a widely used open-source AI agent builder, is under active exploitation with over 12,000 internet-exposed instances at risk. The flaw, CVE-2025-59528, exists in the CustomMCP node and allows unauthenticated JavaScript execution with full Node.js runtime privileges via unsanitised MCP server configuration input.
This marks the third Flowise vulnerability exploited in the wild, underscoring systemic security gaps in AI orchestration and agent-building platforms.</description></item><item><title>How We Broke Top AI Agent Benchmarks: And What Comes Next</title><link>https://gridthegrey.com/posts/how-we-broke-top-ai-agent-benchmarks-and-what-comes-next/</link><pubDate>Sat, 11 Apr 2026 19:15:56 +0000</pubDate><guid>https://gridthegrey.com/posts/how-we-broke-top-ai-agent-benchmarks-and-what-comes-next/</guid><category>Threat Level: CRITICAL</category><category>Agentic AI</category><category>Adversarial ML</category><category>Research</category><category>LLM Security</category><category>AML.T0043 - Craft Adversarial Data</category><category>AML.T0031 - Erode ML Model Integrity</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0015 - Evade ML Model</category><description>Researchers at UC Berkeley demonstrated that every major AI agent benchmark — including SWE-bench, WebArena, OSWorld, and others — can be fully exploited to achieve near-perfect scores without solving a single task, using trivial environmental manipulation rather than genuine capability. The attacks include pytest hook injection, config file leakage, DOM manipulation, and reward component bypassing, with zero LLM calls required in most cases. This represents a systemic integrity failure in the evaluation infrastructure underpinning AI deployment decisions across industry and research.</description></item><item><title>Anthropic Claude Mythos Preview: The More Capable AI Becomes, the More Security It Needs</title><link>https://gridthegrey.com/posts/anthropic-claude-mythos-preview-the-more-capable-ai-becomes-the-more-security-it/</link><pubDate>Sat, 11 Apr 2026 09:21:26 +0000</pubDate><guid>https://gridthegrey.com/posts/anthropic-claude-mythos-preview-the-more-capable-ai-becomes-the-more-security-it/</guid><category>Threat Level: LOW</category><category>LLM Security</category><category>Agentic AI</category><category>Industry News</category><category>Regulatory</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0040 - ML Model Inference API Access</category><description>CrowdStrike, as a founding member of Anthropic's Mythos programme, is highlighting the security challenges posed by increasingly capable frontier AI models, signalling a growing industry focus on securing agentic and large-scale AI systems. The article underscores the philosophical and practical position that AI capability gains must be matched by proportional security investment. While the piece is primarily a vendor partnership announcement and executive viewpoint, it reflects an important industry trend toward formalising AI-specific security frameworks and tooling.</description></item></channel></rss>