<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>GRID THE GREY — AI Threat Intelligence | GRID THE GREY</title><link>https://gridthegrey.com/</link><description>Real-time AI security intelligence — adversarial ML, LLM vulnerabilities, and supply chain threats mapped to MITRE ATLAS and OWASP LLM Top 10.</description><generator>Hugo</generator><language>en-us</language><copyright/><lastBuildDate>Fri, 17 Apr 2026 20:40:47 +0530</lastBuildDate><atom:link href="https://gridthegrey.com/index.xml" rel="self" type="application/rss+xml"/><item><title>Claude Code, Gemini CLI, GitHub Copilot Agents Vulnerable to Prompt Injection via Comments</title><link>https://gridthegrey.com/posts/claude-code-gemini-cli-github-copilot-agents-vulnerable-to-prompt-injection-via/</link><pubDate>Fri, 17 Apr 2026 03:41:16 +0000</pubDate><guid>https://gridthegrey.com/posts/claude-code-gemini-cli-github-copilot-agents-vulnerable-to-prompt-injection-via/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Prompt Injection</category><category>Agentic AI</category><category>Research</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0043 - Craft Adversarial Data</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0057 - LLM Data Leakage</category><description>A researcher has disclosed a novel prompt injection attack technique dubbed 'Comment and Control,' demonstrating that popular AI coding agents — including Claude Code, Gemini CLI, and GitHub Copilot Agents — can be manipulated through malicious instructions embedded in source code comments. The attack exploits the tendency of agentic coding tools to process and act upon contextual content within files they are tasked to read or modify. 
This represents a meaningful escalation in the risk surface of AI-assisted software development workflows.</description></item><item><title>OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal</title><link>https://gridthegrey.com/posts/openai-widens-access-to-cybersecurity-model-after-anthropics-mythos-reveal/</link><pubDate>Fri, 17 Apr 2026 03:39:14 +0000</pubDate><guid>https://gridthegrey.com/posts/openai-widens-access-to-cybersecurity-model-after-anthropics-mythos-reveal/</guid><category>Threat Level: MEDIUM</category><category>LLM Security</category><category>Industry News</category><category>Research</category><category>Regulatory</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0040 - ML Model Inference API Access</category><description>OpenAI has expanded access to GPT-5.4-Cyber, a fine-tuned model designed for defensive cybersecurity applications, following Anthropic's reveal of its Mythos cybersecurity model. While framed as a defensive tool for legitimate security practitioners, the widened access to a capability-enhanced cybersecurity LLM raises dual-use concerns around potential misuse for offensive operations. 
The competitive dynamic between major AI labs in the security-focused model space signals a broader industry trend that warrants careful access control and policy scrutiny.</description></item><item><title>Human Trust of AI Agents</title><link>https://gridthegrey.com/posts/human-trust-of-ai-agents/</link><pubDate>Fri, 17 Apr 2026 03:37:49 +0000</pubDate><guid>https://gridthegrey.com/posts/human-trust-of-ai-agents/</guid><category>Threat Level: MEDIUM</category><category>Research</category><category>Agentic AI</category><category>LLM Security</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0043 - Craft Adversarial Data</category><description>Research published via Schneier on Security reveals that humans systematically over-trust LLMs in strategic game environments, defaulting to Nash-equilibrium rational play based on assumptions of LLM rationality and cooperation. This behavioural bias has direct security implications for mixed human-LLM systems, where adversaries could exploit predictable human over-trust to manipulate decision outcomes. 
The findings underscore systemic risks in deploying LLMs as agents in high-stakes economic or security-relevant decision loops.</description></item><item><title>Frontier AI for Defenders: CrowdStrike and OpenAI TAC</title><link>https://gridthegrey.com/posts/frontier-ai-for-defenders-crowdstrike-and-openai-tac/</link><pubDate>Fri, 17 Apr 2026 03:11:23 +0000</pubDate><guid>https://gridthegrey.com/posts/frontier-ai-for-defenders-crowdstrike-and-openai-tac/</guid><category>Threat Level: MEDIUM</category><category>LLM Security</category><category>Agentic AI</category><category>Industry News</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0040 - ML Model Inference API Access</category><description>CrowdStrike has announced a partnership with OpenAI's Threat Actor Collaboration (TAC) programme, positioning frontier AI models as defensive tools within the cybersecurity operations space. The collaboration signals a broader industry push to deploy advanced LLMs in security contexts, raising important considerations around agentic AI risk, model trust boundaries, and the dual-use nature of frontier AI capabilities. 
While framed as a defensive initiative, the integration of powerful AI into SOC workflows introduces new attack surfaces including prompt injection against agentic pipelines and potential for sensitive data leakage through LLM interfaces.</description></item><item><title>Deterministic + Agentic AI: The Architecture Exposure Validation Requires</title><link>https://gridthegrey.com/posts/deterministic-agentic-ai-the-architecture-exposure-validation-requires/</link><pubDate>Thu, 16 Apr 2026 04:44:10 +0000</pubDate><guid>https://gridthegrey.com/posts/deterministic-agentic-ai-the-architecture-exposure-validation-requires/</guid><category>Threat Level: MEDIUM</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><description>The article examines the architectural tension between fully agentic AI systems and deterministic validation frameworks in security testing contexts, arguing that unconstrained AI autonomy introduces repeatability and auditability risks. It highlights how probabilistic AI behaviour — while valuable for exploration — undermines the measurable, consistent outcomes required for enterprise security validation programs. 
The piece reflects a broader industry debate about governing AI agency in high-stakes operational environments.</description></item><item><title>‘By Design’ Flaw in MCP Could Enable Widespread AI Supply Chain Attacks</title><link>https://gridthegrey.com/posts/by-design-flaw-in-mcp-could-enable-widespread-ai-supply-chain-attacks/</link><pubDate>Thu, 16 Apr 2026 04:24:54 +0000</pubDate><guid>https://gridthegrey.com/posts/by-design-flaw-in-mcp-could-enable-widespread-ai-supply-chain-attacks/</guid><category>Threat Level: CRITICAL</category><category>Supply Chain</category><category>LLM Security</category><category>Prompt Injection</category><category>Agentic AI</category><category>Research</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0031 - Erode ML Model Integrity</category><description>A structural vulnerability in Anthropic's Model Context Protocol (MCP) allows unsanitized commands to be executed silently within AI environments, potentially enabling full system compromise. Researchers classify the flaw as 'by design,' meaning it stems from architectural decisions rather than implementation bugs, making it particularly difficult to patch without protocol-level changes. 
The breadth of MCP adoption across agentic AI toolchains significantly amplifies the supply chain risk.</description></item><item><title>Capsule Security Emerges From Stealth With $7 Million in Funding</title><link>https://gridthegrey.com/posts/capsule-security-emerges-from-stealth-with-7-million-in-funding/</link><pubDate>Thu, 16 Apr 2026 04:23:06 +0000</pubDate><guid>https://gridthegrey.com/posts/capsule-security-emerges-from-stealth-with-7-million-in-funding/</guid><category>Threat Level: MEDIUM</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0057 - LLM Data Leakage</category><description>Capsule Security, an Israeli startup, has emerged from stealth with $7 million in seed funding focused on runtime security for AI agents, continuously monitoring their behaviour to detect and prevent unsafe or malicious actions. This positions the company within the rapidly growing agentic AI security space, where autonomous agents executing actions on behalf of users represent a significant and underexplored attack surface. 
The funding signals growing investor recognition of the risks posed by unmonitored AI agent behaviour, including prompt injection, excessive agency, and unintended tool use.</description></item><item><title>Does Gas Town 'steal' usage from users' LLM credits to improve itself?</title><link>https://gridthegrey.com/posts/does-gas-town-steal-usage-from-users-llm-credits-to-improve-itself/</link><pubDate>Thu, 16 Apr 2026 04:20:31 +0000</pubDate><guid>https://gridthegrey.com/posts/does-gas-town-steal-usage-from-users-llm-credits-to-improve-itself/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0012 - Valid Accounts</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><description>Gas Town, a developer tool with 14.2k GitHub stars, allegedly ships configuration files that autonomously consume users' LLM API credits and exploit their GitHub account permissions to perform work on the maintainer's own repository — without explicit user consent. This represents a serious instance of unauthorised agentic AI behaviour, where an installed tool hijacks user-provisioned AI resources and credentials for third-party benefit. 
The incident raises critical concerns around supply chain trust, excessive agency in LLM-integrated tooling, and the abuse of delegated credentials.</description></item><item><title>Microsoft, Salesforce Patch AI Agent Data Leak Flaws</title><link>https://gridthegrey.com/posts/microsoft-salesforce-patch-ai-agent-data-leak-flaws/</link><pubDate>Thu, 16 Apr 2026 04:19:34 +0000</pubDate><guid>https://gridthegrey.com/posts/microsoft-salesforce-patch-ai-agent-data-leak-flaws/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Prompt Injection</category><category>Agentic AI</category><category>Industry News</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0056 - LLM Meta Prompt Extraction</category><description>Prompt injection vulnerabilities in Salesforce Agentforce and Microsoft Copilot were patched after researchers demonstrated that external attackers could exploit them to exfiltrate sensitive user data. The flaws highlight systemic risks in enterprise AI agent deployments, where insufficient input sanitisation allows malicious content to hijack agent behaviour. 
Both vendors have issued patches, but the incidents underscore the growing attack surface introduced by agentic AI systems operating with elevated privileges.</description></item><item><title>What Claude Code's Source Revealed About AI Engineering Culture</title><link>https://gridthegrey.com/posts/what-claude-code-s-source-revealed-about-ai-engineering-culture/</link><pubDate>Thu, 16 Apr 2026 04:18:34 +0000</pubDate><guid>https://gridthegrey.com/posts/what-claude-code-s-source-revealed-about-ai-engineering-culture/</guid><category>Threat Level: MEDIUM</category><category>Supply Chain</category><category>Agentic AI</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0044 - Full ML Model Access</category><description>A packaging error exposed 512,000 lines of Claude Code's source, revealing severe code quality issues including a 3,167-line monolithic function, undocumented API waste, and regex-based sentiment analysis in an LLM product — raising questions about the security posture of AI-generated codebases. The disclosure highlights systemic risks when AI systems are used to self-develop production tooling without adequate human review or architectural oversight. 
These patterns represent meaningful supply chain and excessive agency concerns for enterprise users of Claude Code.</description></item><item><title>OpenAI Launches GPT-5.4-Cyber with Expanded Access for Security Teams</title><link>https://gridthegrey.com/posts/openai-launches-gpt-5-4-cyber-with-expanded-access-for-security-teams/</link><pubDate>Wed, 15 Apr 2026 09:03:45 +0000</pubDate><guid>https://gridthegrey.com/posts/openai-launches-gpt-5-4-cyber-with-expanded-access-for-security-teams/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Jailbreaks</category><category>Agentic AI</category><category>Prompt Injection</category><category>Industry News</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0031 - Erode ML Model Integrity</category><description>OpenAI has launched GPT-5.4-Cyber, a cybersecurity-optimised model variant, alongside an expanded Trusted Access for Cyber (TAC) programme targeting authenticated defenders and security teams. While the initiative is framed as a defensive measure, the dual-use nature of a vulnerability-detection model introduces significant risk of adversarial inversion — where threat actors could exploit the same capabilities to discover and weaponise unpatched vulnerabilities at scale. 
OpenAI acknowledges this risk and states it is iteratively strengthening safeguards against jailbreaks and adversarial prompt injection as access broadens.</description></item><item><title>AI-Driven Pushpaganda Scam Exploits Google Discover to Spread Scareware and Ad Fraud</title><link>https://gridthegrey.com/posts/ai-driven-pushpaganda-scam-exploits-google-discover-to-spread-scareware-and-ad/</link><pubDate>Wed, 15 Apr 2026 05:56:39 +0000</pubDate><guid>https://gridthegrey.com/posts/ai-driven-pushpaganda-scam-exploits-google-discover-to-spread-scareware-and-ad/</guid><category>Threat Level: HIGH</category><category>Adversarial ML</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0043 - Craft Adversarial Data</category><category>AML.T0019 - Publish Poisoned Datasets</category><description>A large-scale ad fraud and scareware campaign dubbed 'Pushpaganda' has been uncovered exploiting Google Discover by using AI-generated content to poison search discovery surfaces and lure users into enabling malicious push notifications. At its peak the operation generated 240 million bid requests across 113 domains in a single week, demonstrating how AI-generated disinformation can be weaponised as an automated delivery mechanism for financial fraud. 
The campaign highlights the growing abuse of generative AI to scale deceptive content operations against trusted platform surfaces.</description></item><item><title>Scanning for AI Models, (Tue, Apr 14th)</title><link>https://gridthegrey.com/posts/scanning-for-ai-models-tue-apr-14th/</link><pubDate>Wed, 15 Apr 2026 05:39:27 +0000</pubDate><guid>https://gridthegrey.com/posts/scanning-for-ai-models-tue-apr-14th/</guid><category>Threat Level: HIGH</category><category>Model Theft</category><category>LLM Security</category><category>Industry News</category><category>Research</category><category>AML.T0012 - Valid Accounts</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0044 - Full ML Model Access</category><category>AML.T0010 - ML Supply Chain Compromise</category><description>A single threat actor (IP 81.168.83.103) has been systematically scanning internet-facing systems since at least January 2026, specifically targeting credential files, API tokens, and configuration data associated with popular AI platforms including OpenAI, Anthropic Claude, HuggingFace, and the Openclaw/Clawdbot tools. The campaign focuses on harvesting AI API credentials and secrets stored in predictable file paths, representing a targeted reconnaissance effort against AI model deployments. 
If successful, these probes could enable API key theft, model access abuse, and broader compromise of AI-integrated systems.</description></item><item><title>How Hackers Are Thinking About AI</title><link>https://gridthegrey.com/posts/how-hackers-are-thinking-about-ai/</link><pubDate>Tue, 14 Apr 2026 16:52:07 +0000</pubDate><guid>https://gridthegrey.com/posts/how-hackers-are-thinking-about-ai/</guid><category>Threat Level: MEDIUM</category><category>Research</category><category>LLM Security</category><category>Industry News</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0043 - Craft Adversarial Data</category><description>A new academic paper analysed over 160 cybercrime forum conversations to understand how threat actors are discussing and adopting AI tools for criminal purposes. The research documents both misuse of legitimate AI platforms and attempts to build bespoke criminal AI models, revealing early-stage diffusion of AI capabilities within cybercriminal communities. The findings carry practical implications for law enforcement and security practitioners monitoring the evolving AI-enabled threat landscape.</description></item><item><title>Your MTTD Looks Great. Your Post-Alert Gap Doesn't</title><link>https://gridthegrey.com/posts/your-mttd-looks-great-your-post-alert-gap-doesn-t/</link><pubDate>Tue, 14 Apr 2026 09:40:03 +0000</pubDate><guid>https://gridthegrey.com/posts/your-mttd-looks-great-your-post-alert-gap-doesn-t/</guid><category>Threat Level: HIGH</category><category>Agentic AI</category><category>LLM Security</category><category>Industry News</category><category>Research</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><category>AML.T0044 - Full ML Model Access</category><description>The article highlights a critical operational gap in SOC environments where AI-accelerated adversarial capabilities — including an Anthropic model restricted after autonomously exploiting zero-day vulnerabilities — are outpacing defender response workflows. While detection times (MTTD) have improved, the post-alert investigation window remains the primary exposure point, with breakout times of 29 minutes and adversary hand-off times collapsing to 22 seconds. 
The piece argues that AI-driven investigation tooling is the necessary counter to compress this post-alert gap.</description></item><item><title>CSA: CISOs Should Prepare for Post-Mythos Exploit Storm</title><link>https://gridthegrey.com/posts/csa-cisos-should-prepare-for-post-mythos-exploit-storm/</link><pubDate>Tue, 14 Apr 2026 08:19:18 +0000</pubDate><guid>https://gridthegrey.com/posts/csa-cisos-should-prepare-for-post-mythos-exploit-storm/</guid><category>Threat Level: HIGH</category><category>LLM Security</category><category>Jailbreaks</category><category>Prompt Injection</category><category>Agentic AI</category><category>Industry News</category><category>Research</category><category>Regulatory</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0056 - LLM Meta Prompt Extraction</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><description>The Cloud Security Alliance has issued a warning about an anticipated 'AI vulnerability storm' following the release of Anthropic's Claude Mythos model, urging CISOs to prepare defensive postures in advance of expected exploit activity. The advisory signals growing institutional concern that major LLM releases create systemic risk windows as adversaries probe new model capabilities and attack surfaces. 
Security leaders are being advised to treat post-release periods of frontier AI models as high-alert intervals requiring elevated monitoring and response readiness.</description></item><item><title>OWASP GenAI Security Project Gets Update, New Tools Matrix</title><link>https://gridthegrey.com/posts/owasp-genai-security-project-gets-update-new-tools-matrix/</link><pubDate>Tue, 14 Apr 2026 08:18:19 +0000</pubDate><guid>https://gridthegrey.com/posts/owasp-genai-security-project-gets-update-new-tools-matrix/</guid><category>Threat Level: MEDIUM</category><category>LLM Security</category><category>Agentic AI</category><category>Regulatory</category><category>Industry News</category><category>Research</category><category>AML.T0051 - LLM Prompt Injection</category><category>AML.T0054 - LLM Jailbreak</category><category>AML.T0057 - LLM Data Leakage</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0056 - LLM Meta Prompt Extraction</category><category>AML.T0010 - ML Supply Chain Compromise</category><description>OWASP has updated its GenAI Security Project to formally recognise 21 generative AI risks, releasing a new tools matrix to help organisations structure their defences. The update notably distinguishes between securing traditional GenAI systems and the emerging attack surface presented by agentic AI architectures. 
This guidance represents a significant standards-level acknowledgement that agentic AI requires its own dedicated security posture.</description></item><item><title>OpenAI Impacted by North Korea-Linked Axios Supply Chain Hack</title><link>https://gridthegrey.com/posts/openai-impacted-by-north-korea-linked-axios-supply-chain-hack/</link><pubDate>Tue, 14 Apr 2026 07:39:02 +0000</pubDate><guid>https://gridthegrey.com/posts/openai-impacted-by-north-korea-linked-axios-supply-chain-hack/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>OpenAI has been impacted by a supply chain attack attributed to North Korea-linked threat actors, involving a compromised macOS code signing certificate associated with the Axios JavaScript library. The incident highlights the vulnerability of major AI platforms to upstream software supply chain compromises, which could expose users to malicious code distributed through trusted tooling. 
Because OpenAI is a leading AI infrastructure provider, any compromise of its build or distribution pipeline carries significant downstream risk for enterprises relying on its models and APIs.</description></item><item><title>Python Supply-Chain Compromise</title><link>https://gridthegrey.com/posts/python-supply-chain-compromise/</link><pubDate>Mon, 13 Apr 2026 15:41:27 +0000</pubDate><guid>https://gridthegrey.com/posts/python-supply-chain-compromise/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>LLM Security</category><category>Industry News</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0018 - Backdoor ML Model</category><category>AML.T0047 - ML-Enabled Product or Service</category><description>A malicious supply chain attack was discovered in litellm version 1.82.8, a widely used Python library that serves as a unified interface for interacting with large language model APIs. The compromised package contained a hidden .pth file executing arbitrary code on every Python interpreter startup, meaning any developer or AI system relying on litellm could be silently compromised even without explicitly importing the package. 
Given litellm's central role in LLM-powered application stacks, this attack vector poses significant risks to AI pipeline integrity and downstream model infrastructure, including credential theft.</description></item><item><title>Over 1,000 Exposed ComfyUI Instances Targeted in Cryptomining Botnet Campaign</title><link>https://gridthegrey.com/posts/over-1000-exposed-comfyui-instances-targeted-in-cryptomining-botnet-campaign/</link><pubDate>Mon, 13 Apr 2026 14:44:56 +0000</pubDate><guid>https://gridthegrey.com/posts/over-1000-exposed-comfyui-instances-targeted-in-cryptomining-botnet-campaign/</guid><category>Threat Level: HIGH</category><category>Supply Chain</category><category>Industry News</category><category>LLM Security</category><category>AML.T0010 - ML Supply Chain Compromise</category><category>AML.T0047 - ML-Enabled Product or Service</category><category>AML.T0040 - ML Model Inference API Access</category><description>Threat actors are actively exploiting internet-exposed instances of ComfyUI — a popular AI image generation platform — by abusing its custom node execution feature to achieve unauthenticated remote code execution. Over 1,000 publicly accessible instances have been identified as targets, with compromised hosts enrolled in Monero and Conflux cryptomining operations and a Hysteria V2 proxy botnet. The attack highlights critical supply chain and insecure plugin design risks inherent in AI/ML tooling ecosystems.</description></item></channel></rss>