Overview
A threat actor created a fraudulent Hugging Face repository named Open-OSS/privacy-filter that typosquatted OpenAI’s legitimate ‘Privacy Filter’ project. Discovered by HiddenLayer researchers on May 7, 2026, the repository briefly reached the #1 spot on Hugging Face’s trending list and recorded approximately 244,000 downloads before the platform removed it following reports. The campaign demonstrates how adversaries are actively exploiting the trust and discoverability mechanics of AI model-sharing platforms to distribute malware at scale.
Technical Analysis
The attack employed a multi-stage delivery chain designed to evade detection:
Lure Layer: The repository copied OpenAI’s legitimate model card nearly verbatim, presenting a convincing facade to researchers and developers browsing trending AI tools.
Loader Script (loader.py): A Python file that included superficial AI-related code for camouflage. Behind this facade, it:
- Disabled SSL certificate verification
- Decoded a base64-encoded URL pointing to an external resource
- Fetched and executed a JSON payload containing an embedded PowerShell command
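A minimal sketch of the loader pattern described above, for recognition purposes only: the encoded URL is a harmless placeholder, not the real staging address, and no network request is made.

```python
import base64
import ssl

# Placeholder, base64-encoded URL standing in for the loader's real one.
ENCODED_URL = base64.b64encode(b"https://example.invalid/payload.json").decode()

def build_unverified_context() -> ssl.SSLContext:
    # Step 1: disable SSL certificate verification, a classic evasion tell.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

def decode_staging_url(encoded: str) -> str:
    # Step 2: decode the base64-obfuscated URL only at runtime,
    # keeping it out of naive string scans of the source file.
    return base64.b64decode(encoded).decode()

# Step 3 (deliberately not implemented here): the real loader fetched a JSON
# payload from the decoded URL and handed its embedded PowerShell command to
# a hidden shell process.
```

Seeing this combination (runtime base64 decoding of a URL plus disabled certificate checks) in a model repository's helper script is a strong signal to stop and review before executing.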
PowerShell Stage: Executed silently in a hidden window, the command downloaded start.bat, which:
- Performed privilege escalation
- Downloaded the final payload (sefirah)
- Added the payload to Microsoft Defender's exclusion list
- Executed the payload
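The hidden-window execution and Defender-exclusion behaviours above can be turned into a simple triage heuristic. This is an illustrative sketch, not a production detection rule; the pattern list and scoring threshold are assumptions based on the behaviours this report describes.

```python
import re

# Each pattern matches one suspicious PowerShell behaviour from this campaign:
# running in a hidden window, and adding a Microsoft Defender path exclusion.
SUSPICIOUS_PATTERNS = [
    re.compile(r"-windowstyle\s+hidden", re.IGNORECASE),
    re.compile(r"add-mppreference\s+-exclusionpath", re.IGNORECASE),
]

def score_command_line(cmdline: str) -> int:
    """Count how many suspicious patterns appear in a process command line."""
    return sum(1 for p in SUSPICIOUS_PATTERNS if p.search(cmdline))

# Hypothetical command line resembling the start.bat stage (path invented).
sample = ('powershell.exe -WindowStyle Hidden -Command '
          '"Add-MpPreference -ExclusionPath C:\\Users\\Public\\sefirah.exe"')
```

A score of 2 on a single command line, especially one whose parent process is Python, would merit immediate investigation.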
Final Payload (sefirah): A capable Rust-based infostealer and credential harvester targeting:
- Browser data (cookies, passwords, session tokens, encryption keys) from Chromium and Gecko browsers
- Discord tokens, local databases, and master keys
- Cryptocurrency wallets and wallet browser extensions
- SSH, FTP, and VPN credentials including FileZilla configurations
- Sensitive local files and wallet seeds/keys
- System information and multi-monitor screenshots
Stolen data is compressed and exfiltrated to a C2 server at recargapopular[.]com. The malware also incorporates extensive anti-analysis capabilities, including VM, sandbox, and debugger detection.
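The C2 domain above is defanged for safe publication. When hunting for it in proxy or DNS logs, analysts typically refang indicators first; a minimal helper pair:

```python
def refang(indicator: str) -> str:
    """Convert a defanged indicator back to its routable form for log matching."""
    return indicator.replace("[.]", ".").replace("hxxp", "http")

def defang(indicator: str) -> str:
    """Defang a domain so it can be shared in reports without a live link."""
    return indicator.replace(".", "[.]")

# The campaign's exfiltration domain, refanged for matching against logs.
C2_DOMAIN = refang("recargapopular[.]com")
```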
Framework Mapping
- AML.T0010 — ML Supply Chain Compromise: The attack directly targets the AI/ML development pipeline by weaponising a trusted model-sharing platform to distribute malicious packages.
- AML.T0019 — Publish Poisoned Datasets/Repositories: The adversary published a poisoned repository with a near-identical model card to deceive users.
- AML.T0047 — ML-Enabled Product or Service: The attack exploits user trust in legitimate AI tooling ecosystems.
- LLM05 — Supply Chain Vulnerabilities: The incident is a textbook example of third-party AI component compromise through a trusted distribution channel.
Impact Assessment
With 244,000 downloads recorded before removal, the potential victim pool is significant. Any Windows user who installed and executed code from this repository may have had browser credentials, cryptocurrency assets, SSH/VPN configurations, and session tokens exfiltrated. The attack is particularly dangerous for AI researchers, MLOps engineers, and developers who routinely install packages from Hugging Face as part of their workflow and may not scrutinise loader scripts closely.
Mitigation & Recommendations
- Immediate: Check systems for the presence of sefirah or related artefacts; rotate all credentials stored in affected browsers and SSH/VPN configurations.
- Network: Block connections to recargapopular[.]com and monitor for outbound traffic to unknown C2 infrastructure.
- Process: Establish code review requirements for any Python scripts (loader.py patterns) downloaded from ML repositories before execution.
- Platform Hygiene: Only install models from verified organisations on Hugging Face; cross-reference repositories against official vendor GitHub/documentation links.
- Detection: Deploy behavioural monitoring for PowerShell execution spawned from Python processes, particularly those running in hidden windows.
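The Process recommendation above can be partially automated with a static pre-execution scan. A minimal sketch, assuming an indicator list tuned to the loader.py behaviours in this report; real tooling would need a far broader ruleset.

```python
import re

# Each indicator matches one behaviour observed in the loader chain:
# runtime base64 decoding, disabled TLS verification, and dynamic execution.
INDICATORS = {
    "base64_decode": re.compile(r"b64decode"),
    "ssl_disabled": re.compile(r"CERT_NONE|check_hostname\s*=\s*False"),
    "dynamic_exec": re.compile(r"\bexec\(|\beval\(|subprocess|powershell",
                               re.IGNORECASE),
}

def triage(source: str) -> list[str]:
    """Return the names of indicators present in a script's source text."""
    return [name for name, pat in INDICATORS.items() if pat.search(source)]

def needs_review(source: str) -> bool:
    # Assumed threshold: any two indicators together block execution
    # pending manual review.
    return len(triage(source)) >= 2
```

Wiring such a check into CI or a pre-commit hook for any script fetched from a model repository would catch this campaign's loader pattern before it runs.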