
AI-Developed Zero-Day Exploit Used in Mass Exploitation Attempt, Mandiant Warns

TL;DR CRITICAL
  • What happened: GTIG confirms the first observed use of an AI-generated zero-day exploit by a criminal actor preparing a mass exploitation campaign.
  • Who's at risk: Enterprise networks, critical infrastructure operators, and AI service providers are most exposed due to AI-accelerated exploit development and obfuscated LLM abuse pipelines.
  • Act now: Audit LLM API access controls and monitor for anonymised or middleware-proxied access patterns · Accelerate threat hunting for polymorphic and AI-obfuscated malware variants using behavioural detection · Treat AI-assisted vulnerability discovery as an active threat driver and prioritise patch cadence accordingly

Overview

Google’s Threat Intelligence Group (GTIG) has published its latest AI Threat Tracker report, documenting a significant and measurable escalation in adversarial AI capability across criminal and nation-state actors. Most critically, the report confirms the first observed instance of a threat actor deploying a zero-day exploit believed to have been developed using generative AI — a development that represents a genuine inflection point in the offensive AI landscape. The criminal actor planned a mass exploitation campaign, but GTIG’s proactive counter-discovery appears to have disrupted the operation before deployment.

The report covers five distinct threat vectors: AI-generated exploit development, AI-augmented malware and infrastructure development for defense evasion, autonomous malware operations, AI-assisted research and information operations, and obfuscated LLM access schemes.

Technical Analysis

AI-Generated Zero-Day Exploit: GTIG assesses with confidence that the zero-day exploit was developed with AI assistance, lowering the traditional skill barrier to novel exploit creation. This signals that AI can now compress the time from vulnerability discovery to a weaponised exploit.

PROMPTSPY — Autonomous Malware: A newly documented malware family, PROMPTSPY, demonstrates autonomous attack orchestration by using an integrated AI model to interpret system states and dynamically generate commands at runtime. Rather than executing a static payload, PROMPTSPY adapts to the victim environment, effectively offloading operational decision-making to an AI layer. This represents a qualitative shift from traditional malware design.

Polymorphic Malware and Defense Evasion: Suspected Russia-nexus actors have used AI-driven coding workflows to produce infrastructure suites and polymorphic malware at accelerated development cycles. Techniques include AI-generated decoy logic embedded in malware and dynamic obfuscation networks that complicate signature-based detection.

Obfuscated LLM Access: Threat actors are procuring anonymised, premium-tier access to frontier AI models via professionalized middleware and automated registration pipelines. This allows adversaries to bypass usage restrictions, content policies, and rate limits while maintaining operational anonymity.
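As an illustration of the monitoring this implies on the provider side (a hypothetical sketch — the event schema, thresholds, and function name are assumptions, not GTIG guidance), one heuristic is to flag API keys whose traffic originates from an unusually wide spread of networks within a short window, a common signature of middleware-proxied or resold access:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_proxied_keys(requests, max_asns=3, window=timedelta(hours=1)):
    """Flag API keys whose requests arrive from more than `max_asns`
    distinct autonomous systems within `window` -- a rough indicator of
    middleware-proxied access. Each request is a tuple of
    (api_key, timestamp, origin_asn). Thresholds are illustrative."""
    by_key = defaultdict(list)
    for key, ts, asn in requests:
        by_key[key].append((ts, asn))
    flagged = set()
    for key, events in by_key.items():
        events.sort()  # order by timestamp
        for i, (start, _) in enumerate(events):
            # Count distinct origin networks inside the sliding window.
            asns = {asn for ts, asn in events[i:] if ts - start <= window}
            if len(asns) > max_asns:
                flagged.add(key)
                break
    return flagged
```

In practice this would run over provider-side request logs enriched with ASN data; the same sliding-window approach can be pointed at registration telemetry to surface automated bulk sign-ups.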

Information Operations: The pro-Russia campaign “Operation Overload” exemplifies AI-enabled IO, using generative models to produce synthetic media and deepfake content at industrial scale, fabricating the appearance of digital consensus across target populations.

Framework Mapping

  • AML.T0047 (ML-Enabled Product or Service): PROMPTSPY directly integrates LLM inference into malware execution logic, meeting this classification precisely.
  • AML.T0040 (ML Model Inference API Access): Obfuscated LLM access pipelines represent systematic abuse of model APIs to circumvent controls.
  • AML.T0015 (Evade ML Model): Polymorphic, AI-generated obfuscation is specifically designed to defeat ML-based detection systems.
  • LLM08 (Excessive Agency): PROMPTSPY’s autonomous command generation with minimal human oversight is a direct manifestation of excessive AI agency in a weaponised context.
  • LLM05 (Supply Chain Vulnerabilities): Middleware-based LLM access pipelines introduce supply chain risk for AI service providers.

Impact Assessment

The confirmation of an AI-generated zero-day exploit is the most consequential finding. If adversaries can reliably use AI to discover and weaponise vulnerabilities, defensive patch cycles — already under pressure — face a structurally faster adversarial tempo. PROMPTSPY’s autonomous command execution raises the prospect of scaled, low-cost intrusion operations that adapt in real time. Nation-state interest from PRC and DPRK actors in AI-assisted vulnerability research suggests this capability will proliferate across well-resourced adversaries within a short timeframe.

Mitigation & Recommendations

  • Harden LLM API access: Implement strict API key management, anomaly detection on inference patterns, and monitor for middleware proxying or automated bulk registration.
  • Deploy behavioural detection: Move beyond signature-based AV for malware detection; prioritise EDR solutions capable of identifying dynamic, AI-generated command sequences.
  • Accelerate patch prioritisation: Treat AI-assisted vulnerability discovery as an active force multiplier for adversaries; reduce mean time to patch for high-severity CVEs.
  • Counter deepfake IO: Invest in synthetic media detection capabilities and establish provenance verification for sensitive communications.
  • Threat hunt for PROMPTSPY indicators: Review GTIG’s published IOCs and hunt for AI-integrated malware behaviour across endpoint telemetry.
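The last two recommendations can be prototyped as a behavioural hunt over endpoint telemetry. The sketch below is a hypothetical illustration (the event schema, host watchlist, and function name are assumptions, not published PROMPTSPY indicators): it flags processes that both reach out to an LLM inference endpoint and subsequently spawn a shell interpreter — the runtime pattern the report attributes to AI-integrated malware.

```python
# Illustrative watchlist of LLM inference endpoints -- extend per environment.
LLM_API_HOSTS = {"api.openai.com", "generativelanguage.googleapis.com"}
SHELLS = {"cmd.exe", "powershell.exe", "/bin/sh", "/bin/bash"}

def hunt_ai_integrated_processes(events):
    """Return PIDs that contacted an LLM inference endpoint and later
    spawned a shell interpreter. `events` is a time-ordered list of
    dicts with a hypothetical schema:
      {"pid": int, "type": "net" | "spawn", "dest": host or child image}
    """
    contacted = set()  # PIDs seen talking to an LLM API
    hits = set()
    for e in events:
        if e["type"] == "net" and e["dest"] in LLM_API_HOSTS:
            contacted.add(e["pid"])
        elif e["type"] == "spawn" and e["pid"] in contacted \
                and e["dest"] in SHELLS:
            hits.add(e["pid"])
    return hits
```

A production version would query the EDR's own data model rather than raw dicts, but the indicator of behaviour — outbound LLM API traffic followed by command-interpreter execution from the same process — carries over directly.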
