<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>GRID THE GREY — AI Threat Intelligence | GRID THE GREY</title><link>https://gridthegrey.com/</link><description>Real-time AI security intelligence — adversarial ML, LLM vulnerabilities, and supply chain threats mapped to MITRE ATLAS and OWASP LLM Top 10.</description><generator>Hugo</generator><language>en-us</language><copyright/><lastBuildDate>Tue, 14 Apr 2026 22:22:17 +0530</lastBuildDate><atom:link href="https://gridthegrey.com/index.xml" rel="self" type="application/rss+xml"/><item><title>How Hackers Are Thinking About AI</title><link>https://gridthegrey.com/posts/how-hackers-are-thinking-about-ai/</link><pubDate>Tue, 14 Apr 2026 16:52:07 +0000</pubDate><guid>https://gridthegrey.com/posts/how-hackers-are-thinking-about-ai/</guid><category>Threat Level: MEDIUM</category><category>Research</category><category>LLM Security</category><category>Industry News</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0043 - Craft Adversarial Data</category><description>A new academic paper analysed over 160 cybercrime forum conversations to understand how threat actors are discussing and adopting AI tools for criminal purposes. The research documents both misuse of legitimate AI platforms and attempts to build bespoke criminal AI models, revealing early-stage diffusion of AI capabilities within cybercriminal communities. The findings carry practical implications for law enforcement and security practitioners monitoring the evolving AI-enabled threat landscape.</description></item><item><title>Your MTTD Looks Great. Your Post-Alert Gap Doesn't</title><link>https://gridthegrey.com/posts/your-mttd-looks-great-your-post-alert-gap-doesn-t/</link><pubDate>Tue, 14 Apr 2026 09:40:03 +0000</pubDate><guid>https://gridthegrey.com/posts/your-mttd-looks-great-your-post-alert-gap-doesn-t/</guid><category>Threat Level: HIGH</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0044 - Full ML Model Access</category><description>The article highlights a critical operational gap in SOC environments where AI-accelerated adversarial capabilities — including an Anthropic model restricted after autonomously exploiting zero-day vulnerabilities — are outpacing defender response workflows. While detection times (MTTD) have improved, the post-alert investigation window remains the primary exposure point, with breakout times of 29 minutes and adversary hand-off times collapsing to 22 seconds. 
The piece argues that AI-driven investigation tooling is the necessary countermeasure to compress this post-alert gap.</description></item><item><title>CSA: CISOs Should Prepare for Post-Mythos Exploit Storm</title><link>https://gridthegrey.com/posts/csa-cisos-should-prepare-for-post-mythos-exploit-storm/</link><pubDate>Tue, 14 Apr 2026 08:19:18 +0000</pubDate><guid>https://gridthegrey.com/posts/csa-cisos-should-prepare-for-post-mythos-exploit-storm/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Jailbreaks</category><category>Prompt Injection</category><category>Agentic AI</category><category>Industry News</category><category>Research</category><category>Regulatory</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0056 - LLM Meta Prompt Extraction</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><description>The Cloud Security Alliance has issued a warning about an anticipated 'AI vulnerability storm' following the release of Anthropic's Claude Mythos model, urging CISOs to prepare defensive postures in advance of expected exploit activity. The advisory signals growing institutional concern that major LLM releases create systemic risk windows as adversaries probe new model capabilities and attack surfaces. Security leaders are being advised to treat post-release periods of frontier AI models as high-alert intervals requiring elevated monitoring and response readiness.</description></item><item><title>OWASP GenAI Security Project Gets Update, New Tools Matrix</title><link>https://gridthegrey.com/posts/owasp-genai-security-project-gets-update-new-tools-matrix/</link><pubDate>Tue, 14 Apr 2026 08:18:19 +0000</pubDate><guid>https://gridthegrey.com/posts/owasp-genai-security-project-gets-update-new-tools-matrix/</guid><category>Threat Level: MEDIUM</category><category>LLM Security</category><category>Agentic AI</category><category>Regulatory</category><category>Industry News</category><category>Research</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0056 - LLM Meta Prompt Extraction</category><category>AML.T0010 - ML Supply Chain Compromise</category><description>OWASP has updated its GenAI Security Project to formally recognise 21 generative AI risks, releasing a new tools matrix to help organisations structure their defences. The update notably distinguishes between securing traditional GenAI systems and the emerging attack surface presented by agentic AI architectures. 
This guidance represents a significant standards-level acknowledgement that agentic AI requires its own dedicated security posture.</description></item><item><title>OpenAI Impacted by North Korea-Linked Axios Supply Chain Hack</title><link>https://gridthegrey.com/posts/openai-impacted-by-north-korea-linked-axios-supply-chain-hack/</link><pubDate>Tue, 14 Apr 2026 07:39:02 +0000</pubDate><guid>https://gridthegrey.com/posts/openai-impacted-by-north-korea-linked-axios-supply-chain-hack/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>OpenAI has been impacted by a supply chain attack attributed to North Korea-linked threat actors, involving a compromised macOS code signing certificate associated with the Axios JavaScript library. The incident highlights the vulnerability of major AI platforms to upstream software supply chain compromises, which could expose users to malicious code distributed through trusted tooling. Because OpenAI is a leading AI infrastructure provider, any compromise of its build or distribution pipeline carries significant downstream risk for enterprises relying on its models and APIs.</description></item><item><title>Python Supply-Chain Compromise</title><link>https://gridthegrey.com/posts/python-supply-chain-compromise/</link><pubDate>Mon, 13 Apr 2026 15:41:27 +0000</pubDate><guid>https://gridthegrey.com/posts/python-supply-chain-compromise/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>LLM Security</category><category>Industry News</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0018 - Backdoor ML Model</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>A malicious supply chain attack was discovered in litellm version 1.82.8, a widely used Python library that serves as a unified interface for interacting with large language model APIs. The compromised package contained a hidden .pth file executing arbitrary code on every Python interpreter startup, meaning any developer or AI system relying on litellm could be silently compromised without ever explicitly importing the package. Given litellm's central role in LLM-powered application stacks, this attack vector poses significant risk to AI pipeline integrity, credentials, and downstream model infrastructure.</description></item><item><title>Over 1,000 Exposed ComfyUI Instances Targeted in Cryptomining Botnet Campaign</title><link>https://gridthegrey.com/posts/over-1000-exposed-comfyui-instances-targeted-in-cryptomining-botnet-campaign/</link><pubDate>Mon, 13 Apr 2026 14:44:56 +0000</pubDate><guid>https://gridthegrey.com/posts/over-1000-exposed-comfyui-instances-targeted-in-cryptomining-botnet-campaign/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><description>Threat actors are actively exploiting internet-exposed ComfyUI instances — a popular AI image generation platform — by abusing its custom node execution feature to achieve unauthenticated remote code execution. 
Over 1,000 publicly accessible instances have been identified as targets, with compromised hosts enrolled in Monero and Conflux cryptomining operations and a Hysteria V2 proxy botnet. The attack highlights the critical risks of supply chain compromise and insecure plugin design inherent in AI/ML tooling ecosystems.</description></item><item><title>Google's Vertex AI Is Over-Privileged. That's a Problem</title><link>https://gridthegrey.com/posts/google-s-vertex-ai-is-over-privileged-that-s-a-problem/</link><pubDate>Mon, 13 Apr 2026 14:39:34 +0000</pubDate><guid>https://gridthegrey.com/posts/google-s-vertex-ai-is-over-privileged-that-s-a-problem/</guid><category>Threat Level: HIGH</category><category>Agentic AI</category><category>LLM Security</category><category>Research</category><category>Industry News</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0012 - Valid Accounts</category><description>Palo Alto Networks researchers have identified over-privilege vulnerabilities in Google's Vertex AI platform, demonstrating how malicious actors could exploit AI agents to exfiltrate sensitive data and pivot into restricted cloud infrastructure. The findings highlight systemic risks in agentic AI deployments where excessive permissions granted to AI workloads expand the attack surface beyond traditional cloud security boundaries. This research underscores the growing urgency around securing AI agent permissions and enforcing least-privilege principles in enterprise ML platforms.</description></item><item><title>Flowise AI Agent Builder Under Active CVSS 10.0 RCE Exploitation; 12,000+ Instances Exposed</title><link>https://gridthegrey.com/posts/flowise-ai-agent-builder-under-active-cvss-10-0-rce-exploitation-12000-instances/</link><pubDate>Mon, 13 Apr 2026 14:19:20 +0000</pubDate><guid>https://gridthegrey.com/posts/flowise-ai-agent-builder-under-active-cvss-10-0-rce-exploitation-12000-instances/</guid><category>Threat Level: CRITICAL</category><category>Agentic AI</category><category>LLM Security</category><category>Supply Chain</category><category>Industry News</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0010 - ML Supply Chain Compromise</category><description>A maximum-severity (CVSS 10.0) remote code execution vulnerability in Flowise, a widely used open-source AI agent builder, is under active exploitation, with over 12,000 internet-exposed instances at risk. The flaw, CVE-2025-59528, exists in the CustomMCP node and allows unauthenticated JavaScript execution with full Node.js runtime privileges via unsanitised MCP server configuration input. 
This marks the third Flowise vulnerability exploited in the wild, underscoring systemic security gaps in AI orchestration and agent-building platforms.</description></item><item><title>How We Broke Top AI Agent Benchmarks: And What Comes Next</title><link>https://gridthegrey.com/posts/how-we-broke-top-ai-agent-benchmarks-and-what-comes-next/</link><pubDate>Sat, 11 Apr 2026 19:15:56 +0000</pubDate><guid>https://gridthegrey.com/posts/how-we-broke-top-ai-agent-benchmarks-and-what-comes-next/</guid><category>Threat Level: CRITICAL</category><category>Agentic AI</category><category>Adversarial ML</category><category>Research</category><category>LLM Security</category><category>AML.T0043 - Craft Adversarial Data</category><category>AML.T0031 - Erode ML Model Integrity</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0015 - Evade ML Model</category><description>Researchers at UC Berkeley demonstrated that every major AI agent benchmark — including SWE-bench, WebArena, OSWorld, and others — can be fully exploited to achieve near-perfect scores without solving a single task, using trivial environmental manipulation rather than genuine capability. The attacks include pytest hook injection, config file leakage, DOM manipulation, and reward component bypassing, with zero LLM calls required in most cases. This represents a systemic integrity failure in the evaluation infrastructure underpinning AI deployment decisions across industry and research.</description></item><item><title>Anthropic Claude Mythos Preview: The More Capable AI Becomes, the More Security It Needs</title><link>https://gridthegrey.com/posts/anthropic-claude-mythos-preview-the-more-capable-ai-becomes-the-more-security-it/</link><pubDate>Sat, 11 Apr 2026 09:21:26 +0000</pubDate><guid>https://gridthegrey.com/posts/anthropic-claude-mythos-preview-the-more-capable-ai-becomes-the-more-security-it/</guid><category>Threat Level: LOW</category><category>LLM Security</category><category>Agentic AI</category><category>Industry News</category><category>Regulatory</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0040 - ML Model Inference API Access</category><description>CrowdStrike, as a founding member of Anthropic's Mythos program, is highlighting the security challenges posed by increasingly capable frontier AI models, signalling a growing industry focus on securing agentic and large-scale AI systems. The article underscores the philosophical and practical position that AI capability gains must be matched by proportional security investment. 
While the piece is primarily a vendor partnership announcement and executive viewpoint, it reflects an important industry trend toward formalising AI-specific security frameworks and tooling.</description></item><item><title>US summons bank bosses over cyber risks from Anthropic's latest AI model</title><link>https://gridthegrey.com/posts/us-summons-bank-bosses-over-cyber-risks-from-anthropic-s-latest-ai-model/</link><pubDate>Fri, 10 Apr 2026 13:47:17 +0000</pubDate><guid>https://gridthegrey.com/posts/us-summons-bank-bosses-over-cyber-risks-from-anthropic-s-latest-ai-model/</guid><category>Threat Level: CRITICAL</category><category>LLM Security</category><category>Agentic AI</category><category>Regulatory</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0044 - Full ML Model Access</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0010 - ML Supply Chain Compromise</category><description>The US Treasury convened major bank executives to discuss cybersecurity risks posed by Anthropic's unreleased Claude Mythos model, which the company claims has surpassed nearly all human experts at finding and exploiting software vulnerabilities. A code leak prompted Anthropic to publicly acknowledge the model's unprecedented offensive cyber capability, raising concerns about systemic risk to the financial sector. The meeting signals growing regulatory awareness of AI-enabled cyber threats to critical financial infrastructure.</description></item><item><title>Can Anthropic Keep Its Exploit-Writing AI Out of the Wrong Hands?</title><link>https://gridthegrey.com/posts/can-anthropic-keep-its-exploit-writing-ai-out-of-the-wrong-hands/</link><pubDate>Fri, 10 Apr 2026 13:00:00 +0000</pubDate><guid>https://gridthegrey.com/posts/can-anthropic-keep-its-exploit-writing-ai-out-of-the-wrong-hands/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Agentic AI</category><category>Research</category><category>Industry News</category><category>Regulatory</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0044 - Full ML Model Access</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0040 - ML Model Inference API Access</category><description>Anthropic has released a preview of 'Mythos,' an AI model reportedly capable of autonomously discovering and exploiting critical zero-day vulnerabilities, raising significant dual-use concerns. While Anthropic claims the model ships with access controls, the security community is scrutinising whether those safeguards are sufficient to prevent misuse by malicious actors. 
The development represents a pivotal moment in the arms race between offensive AI capabilities and defensive governance frameworks.</description></item><item><title>Browser Extensions Are the New AI Consumption Channel That No One Is Talking About</title><link>https://gridthegrey.com/posts/browser-extensions-are-the-new-ai-consumption-channel-that-no-one-is-talking/</link><pubDate>Fri, 10 Apr 2026 11:00:00 +0000</pubDate><guid>https://gridthegrey.com/posts/browser-extensions-are-the-new-ai-consumption-channel-that-no-one-is-talking/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Supply Chain</category><category>Agentic AI</category><category>Industry News</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0040 - ML Model Inference API Access</category><description>A LayerX report reveals that AI browser extensions represent a largely unmonitored attack surface in enterprise environments, with 1 in 6 enterprise users already running at least one AI extension. These extensions are statistically riskier than standard extensions — 60% more likely to carry a CVE, 3x more likely to access cookies, and capable of exfiltrating sensitive data without triggering DLP or SaaS monitoring controls. The finding highlights a critical governance gap in AI consumption channels that bypass traditional enterprise security tooling.</description></item><item><title>Process Manager for Autonomous AI Agents</title><link>https://gridthegrey.com/posts/process-manager-for-autonomous-ai-agents/</link><pubDate>Thu, 09 Apr 2026 06:00:55 +0000</pubDate><guid>https://gridthegrey.com/posts/process-manager-for-autonomous-ai-agents/</guid><category>Threat Level: HIGH</category><category>Agentic AI</category><category>LLM Security</category><category>Supply Chain</category><category>Prompt Injection</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0040 - ML Model Inference API Access</category><description>botctl is an open-source process manager that enables persistent, autonomous AI agents (currently Claude-backed) to run continuously as background daemons with tool access, file system write permissions, and internet connectivity. While botctl is marketed as a productivity tool, its architecture introduces a substantial attack surface through unattended agentic execution, a skills marketplace open to third-party prompt injection, and a locally exposed web dashboard. 
The combination of persistent autonomy, extensible skill modules from arbitrary GitHub repositories, and session memory creates compounding risk vectors relevant to agentic AI security.</description></item><item><title>AI-Assisted Supply Chain Attack Targets GitHub</title><link>https://gridthegrey.com/posts/2026-04-13-ai-assisted-supply-chain-attack-targets-github/</link><pubDate>Mon, 06 Apr 2026 21:38:53 +0000</pubDate><guid>https://gridthegrey.com/posts/2026-04-13-ai-assisted-supply-chain-attack-targets-github/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Agentic AI</category><category>Industry News</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>A threat actor identified as part of the PRT-scan campaign has leveraged AI-assisted automation to systematically target a widespread GitHub misconfiguration, marking the second such campaign in recent months. The use of AI for automated reconnaissance and exploitation of supply chain vulnerabilities represents a significant escalation in attacker capability. This campaign highlights the growing risk of AI-augmented attacks against software supply chains, which can have cascading downstream effects on ML pipelines and production systems.</description></item><item><title>How Charlotte AI AgentWorks Fuels Security's Agentic Ecosystem</title><link>https://gridthegrey.com/posts/how-charlotte-ai-agentworks-fuels-security-s-agentic-ecosystem/</link><pubDate>Mon, 06 Apr 2026 16:52:49 +0000</pubDate><guid>https://gridthegrey.com/posts/how-charlotte-ai-agentworks-fuels-security-s-agentic-ecosystem/</guid><category>Threat Level: MEDIUM</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0040 - ML Model Inference API Access</category><description>CrowdStrike's Charlotte AI AgentWorks introduces an agentic security ecosystem where autonomous AI agents collaborate to perform security operations tasks with reduced human intervention. The platform raises important considerations around excessive agency, trust boundaries between agents, and the attack surface introduced by interconnected AI systems in security-critical environments. As agentic SOC architectures proliferate, the security of the AI agents themselves becomes a primary concern.</description></item><item><title>New CrowdStrike Innovations Secure AI Agents and Govern Shadow AI Across Endpoints, SaaS, and Cloud</title><link>https://gridthegrey.com/posts/new-crowdstrike-innovations-secure-ai-agents-and-govern-shadow-ai-across-saas/</link><pubDate>Mon, 06 Apr 2026 16:52:49 +0000</pubDate><guid>https://gridthegrey.com/posts/new-crowdstrike-innovations-secure-ai-agents-and-govern-shadow-ai-across-saas/</guid><category>Threat Level: MEDIUM</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>Regulatory</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0040 - ML Model Inference API Access</category><description>CrowdStrike has announced new platform innovations targeting the governance of Shadow AI and the security of AI agents across endpoints, SaaS, and cloud environments. 
The release highlights growing enterprise concerns around unmanaged AI tool proliferation and the attack surface introduced by autonomous AI agents. These developments reflect an industry-wide shift toward operationalising AI-specific security controls within existing SOC workflows.</description></item><item><title>Claude Source Code Leak Highlights Big Supply Chain Missteps</title><link>https://gridthegrey.com/posts/2026-04-13-claude-source-code-leak-highlights-big-supply-chain-missteps/</link><pubDate>Fri, 03 Apr 2026 13:00:00 +0000</pubDate><guid>https://gridthegrey.com/posts/2026-04-13-claude-source-code-leak-highlights-big-supply-chain-missteps/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Model Theft</category><category>LLM Security</category><category>Industry News</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0044 - Full ML Model Access</category><category>AML.T0056 - LLM Meta Prompt Extraction</category><category>AML.T0018 - Backdoor ML Model</category><category>AML.T0031 - Erode ML Model Integrity</category><description>A reported source code leak affecting Claude, Anthropic's large language model, underscores systemic weaknesses in AI software supply chains and the absence of robust oversight mechanisms at critical development and distribution layers. The incident highlights how proprietary model code, training pipelines, and system prompts can become high-value targets for adversarial actors pursuing model theft, backdoor insertion, or competitive intelligence gathering. This event serves as a broader warning to treat AI development infrastructure with the same rigour applied to other critical systems.</description></item></channel></rss>