
Mass Scan Reveals Widespread Authentication Failures Across Exposed AI Infrastructure

TL;DR HIGH
  • What happened: Over one million exposed AI services found running without authentication, leaking credentials and user data.
  • Who's at risk: Enterprises and developers self-hosting LLM infrastructure without hardening defaults are directly exposed to credential theft, data leakage, and model abuse.
  • Act now: Enable authentication on all self-hosted AI services before internet exposure · Rotate any API keys that may have been exposed in plaintext configurations · Audit agent platforms (Flowise, n8n) for unintended public access and restrict to VPN or internal networks

Overview

A large-scale internet scan of over two million hosts — yielding more than one million exposed AI services — has uncovered an alarming concentration of security failures across self-hosted LLM deployments. Conducted by the Intruder research team in the wake of the ClawdBot incident (a self-hosted AI assistant averaging 2.6 CVEs per day), the investigation found authentication absent by default, API keys exposed in plaintext, and agent management platforms open to unauthenticated public access. The findings represent one of the broadest empirical assessments of real-world AI infrastructure security to date.

Technical Analysis

The core failure pattern is straightforward but consequential: many popular self-hosted AI frameworks ship without authentication enabled by default. Operators deploying these tools out-of-the-box inherit this insecure posture and frequently expose services directly to the internet without remediation.
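
To make the failure mode concrete, the sketch below probes a single host for a handful of well-known self-hosted AI endpoints and flags any that answer without credentials. The ports and paths are common defaults assumed for illustration, not an authoritative list; run it only against infrastructure you are authorised to test.

  import requests

  # Common default ports/paths for popular self-hosted AI services.
  # These values are illustrative assumptions, not an exhaustive list.
  PROBES = [
      (11434, "/api/tags", "Ollama"),          # model listing endpoint
      (3000, "/api/v1/chatflows", "Flowise"),  # workflow listing endpoint
      (5678, "/rest/workflows", "n8n"),        # workflow API (path assumed)
  ]

  def audit(host):
      """Flag services on host that respond without authentication."""
      for port, path, service in PROBES:
          url = f"http://{host}:{port}{path}"
          try:
              resp = requests.get(url, timeout=5)
          except requests.RequestException:
              continue  # port closed, filtered, or unreachable
          if resp.status_code == 200:
              print(f"[!] {service} at {url} answered without credentials")
          elif resp.status_code in (401, 403):
              print(f"[ok] {service} at {url} requires authentication")

  if __name__ == "__main__":
      audit("198.51.100.10")  # replace with a host you are authorised to test

A 200 response on any of these paths is a strong signal of an unauthenticated deployment; anything short of a 401/403 challenge warrants manual review.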

Key findings include:

  • Exposed chatbot conversation histories via OpenUI instances, revealing sensitive enterprise dialogue without any access control.
  • Freely accessible multimodal LLMs available to anonymous users, enabling jailbreak attempts and misuse on third-party compute — including generation of illegal content — with no accountability trail.
  • Plaintext API key disclosure in Claude-powered chatbot configurations, enabling full upstream account compromise.
  • Flowise and n8n agent platforms exposed to the internet, revealing internal business logic, credential lists, and LLM workflow configurations to unauthenticated visitors.

The Flowise instances are particularly notable: while stored credential values were not returned to unauthenticated callers, the exposure of workflow structure, prompt templates, and credential metadata still constitutes significant information leakage for targeted attackers.
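
As a rough illustration of what that leakage looks like in practice, the sketch below pulls workflow definitions from an unauthenticated Flowise instance and prints node labels and credential references. The endpoint path and JSON shape are assumptions based on typical Flowise deployments and may vary by version.

  import json
  import requests

  def dump_workflow_metadata(base_url):
      """List workflow names, node labels, and credential references
      exposed by an unauthenticated Flowise instance (illustrative)."""
      resp = requests.get(f"{base_url}/api/v1/chatflows", timeout=5)
      resp.raise_for_status()
      for flow in resp.json():
          print(f"[workflow] {flow.get('name', '<unnamed>')}")
          # flowData is assumed to carry the node graph as a JSON string,
          # including prompt templates and credential *references*
          graph = json.loads(flow.get("flowData", "{}"))
          for node in graph.get("nodes", []):
              data = node.get("data", {})
              line = f"  node: {data.get('label', '?')}"
              if data.get("credential"):
                  line += f" (credential id: {data['credential']})"
              print(line)

  if __name__ == "__main__":
      dump_workflow_metadata("http://198.51.100.10:3000")  # authorised targets only

Even without secret values, output like this hands an attacker a map of integrations, prompt logic, and credential names to target.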

Framework Mapping

  • AML.T0040 (ML Model Inference API Access) and AML.T0044 (Full ML Model Access): Unauthenticated services grant anonymous actors direct inference access.
  • AML.T0054 (LLM Jailbreak): Open access enables adversaries to abuse exposed models for safety-bypassing use cases at scale.
  • AML.T0057 (LLM Data Leakage): Chat histories and workflow configs expose sensitive enterprise data.
  • LLM06 (Sensitive Information Disclosure): API keys and conversation data exposed via misconfigured deployments.
  • LLM07 (Insecure Plugin Design): Agent platforms (Flowise, n8n) expose credential and integration logic without access controls.

Impact Assessment

The affected population spans any organisation self-hosting LLM tooling — from startups using open-source frameworks to enterprises running internal AI assistants. Risks are tiered:

  1. Reputational: Exposure of NSFW or sensitive user conversations.
  2. Financial: Stolen API keys result in direct cost liability from upstream model providers.
  3. Operational: Exposed business logic in agent platforms enables competitive intelligence gathering or targeted attacks on dependent systems.
  4. Compliance: Chat history exposure likely constitutes a data breach under GDPR and similar frameworks.

Mitigation & Recommendations

  • Enable authentication immediately on all self-hosted AI services; treat unauthenticated deployment as a critical misconfiguration.
  • Audit certificate transparency logs for your domains to identify unintended AI service exposure; see the sketch after this list.
  • Rotate all API keys associated with any previously exposed service, including upstream provider credentials (OpenAI, Anthropic, etc.).
  • Place agent management platforms (Flowise, n8n, similar) behind VPN or zero-trust access policies; they should never be internet-facing without authentication.
  • Review default configurations for every AI framework before deployment — assume defaults are insecure.
  • Implement network segmentation to prevent lateral movement from compromised AI infrastructure to core systems.
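
As a starting point for the certificate transparency audit above, the sketch below queries crt.sh's public JSON endpoint for certificates issued under a domain and flags hostnames suggestive of AI tooling. The keyword list is an illustrative assumption; tune it to your organisation's naming conventions.

  import requests

  # Hostname keywords that often indicate AI tooling; illustrative only.
  AI_HINTS = ("ollama", "webui", "flowise", "n8n", "chat", "llm")

  def audit_domain(domain):
      """Query crt.sh for certificates under domain and flag hostnames
      that look like AI services for manual exposure review."""
      resp = requests.get(
          "https://crt.sh/",
          params={"q": f"%.{domain}", "output": "json"},
          timeout=30,
      )
      resp.raise_for_status()
      names = set()
      for entry in resp.json():
          # name_value may bundle several SAN entries separated by newlines
          names.update(entry.get("name_value", "").splitlines())
      for name in sorted(names):
          if any(hint in name.lower() for hint in AI_HINTS):
              print(f"[review] {name} - confirm this host should be public")

  if __name__ == "__main__":
      audit_domain("example.com")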
