MITRE ATLAS · OWASP | Severity: HIGH (significant risk, prioritise patching) | Relevance: 8.2

Moltbook breach: When Cross-App Permissions Stack into Risk

TL;DR HIGH
  • What happened: AI agents bridging multiple SaaS apps via OAuth create cross-app permission stacks invisible to standard access reviews.
  • Who's at risk: Any organisation deploying AI agents or MCP connectors across multiple SaaS platforms is exposed, especially where OAuth grants are provisioned without centralised identity governance.
  • Act now:
      – Audit all non-human identities (bots, agents, service accounts) and map their cross-application OAuth scopes
      – Enforce zero-trust principles for AI agent permissions: scope tokens to least privilege and revoke unused grants immediately
      – Implement cross-app access review tooling capable of reasoning about combined permission sets across integrated applications

Overview

On 31 January 2026, researchers disclosed a critical exposure at Moltbook, a social network purpose-built for AI agents. The platform left its database publicly accessible, leaking 35,000 email addresses and 1.5 million agent API tokens across 770,000 active agents. Most critically, plaintext third-party credentials, including OpenAI API keys, were stored alongside the agent tokens needed to hijack those agents entirely.

The incident is a textbook example of what security researchers are calling a toxic combination: a permission failure that spans two or more applications, bridged by an AI agent, OAuth grant, or MCP server, that no single application owner ever sanctioned as their own risk surface.

Technical Analysis

Toxic combinations emerge from a structural gap in how modern SaaS permissions are governed. Each individual application may pass a routine access review. The danger lives in the trust relationship between applications — the bridge — that forms at runtime through OAuth grants, API scopes, and tool-use chains.
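As an illustrative sketch (every identity, application, and scope below is hypothetical), the composite risk can be surfaced by treating each grant as an edge in a graph and flagging any identity whose edges span more than one application. Each row passes a single-app review; only the combined view reveals the bridge:

```python
from collections import defaultdict

# Hypothetical OAuth grant inventory: (identity, source_app, target_app, scopes).
# Each grant looks harmless in isolation; the risk lives in the composite graph.
GRANTS = [
    ("ide-agent", "IDE",   "Slack",      {"chat:write"}),
    ("ide-agent", "Slack", "IDE",        {"context:read"}),
    ("crm-bot",   "Drive", "Salesforce", {"files:read", "records:write"}),
]

def find_toxic_combinations(grants):
    """Flag identities whose grants connect two or more applications.

    A 'toxic combination' here is any identity bridging apps that no
    single application owner reviewed together.
    """
    apps_by_identity = defaultdict(set)
    for identity, source, target, _scopes in grants:
        apps_by_identity[identity].update({source, target})
    return {i: sorted(a) for i, a in apps_by_identity.items() if len(a) > 1}

bridges = find_toxic_combinations(GRANTS)
for identity, apps in bridges.items():
    print(f"{identity} bridges: {', '.join(apps)}")
```

A real inventory would be fed from each platform's grant-listing API rather than a static list, but the detection logic is the same set union over identities.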

Consider a representative scenario: a developer installs an MCP connector allowing an IDE to post code snippets to a Slack channel. The Slack administrator approves the bot; the IDE administrator approves the outbound connection. Neither administrator reviews the composite trust relationship that now exists between source-code editing and business messaging. The attack surface runs bidirectionally:

  • Inbound: Prompt injections crafted inside the IDE exfiltrate confidential code into Slack.
  • Outbound: Malicious instructions planted in Slack flow back into the IDE’s context on the next agent session.
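One lightweight control on the exfiltration leg is to scan outbound messages for credential-shaped strings before they cross the bridge. The sketch below is illustrative only: the three regex patterns and the sample message are assumptions, and production scanners ship far larger rule libraries:

```python
import re

# Illustrative patterns for credential-shaped strings; a real secret
# scanner would use a maintained rule set, not three hand-written regexes.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),          # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),             # AWS access key ID
    re.compile(r"xox[baprs]-[A-Za-z0-9-]{10,}"), # Slack token
]

def scan_outbound(message: str) -> list[str]:
    """Return any credential-shaped substrings found in an outbound message."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(message))
    return hits

msg = "deploy key is sk-abcdefghijklmnopqrstuvwx, ping me"
print(scan_outbound(msg))
```

Pattern matching alone will not stop a determined prompt-injection attacker (secrets can be encoded or split across messages), so this belongs as one layer alongside scope restriction, not a substitute for it.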

The same shape appears whenever an AI agent bridges Google Drive and Salesforce, or any intermediary creates mutual trust between two platforms through a grant that appears normal in isolation.

Non-human identities — service accounts, bots, and AI agents — compound the problem because they hold persistent, broadly scoped tokens with no human lifecycle attached. They outnumber human identities in most mature SaaS environments and are rarely provisioned through standard identity systems, making them invisible to conventional IAM tooling.
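The two failure modes described above, ownerless tokens and tokens with no lifecycle, can be checked mechanically once an identity inventory exists. A minimal sketch, with hypothetical identities and a fixed clock so the staleness check is reproducible:

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Identity:
    name: str
    kind: str                # "human", "bot", "agent", "service_account"
    scopes: frozenset
    last_used: datetime
    owner: str | None        # accountable human, if any

# Fixed reference time so the example is deterministic.
NOW = datetime(2026, 2, 1, tzinfo=timezone.utc)

def audit_non_human(identities, stale_after=timedelta(days=90)):
    """Flag non-human identities that are ownerless or stale."""
    findings = []
    for ident in identities:
        if ident.kind == "human":
            continue
        if ident.owner is None:
            findings.append((ident.name, "no accountable owner"))
        if NOW - ident.last_used > stale_after:
            findings.append((ident.name, "token unused > 90 days"))
    return findings

AGENTS = [
    Identity("release-bot", "bot", frozenset({"chat:write"}),
             datetime(2026, 1, 30, tzinfo=timezone.utc), "alice"),
    Identity("legacy-sync", "service_account", frozenset({"files:read"}),
             datetime(2025, 9, 1, tzinfo=timezone.utc), None),
]
findings = audit_non_human(AGENTS)
```

Attaching a human owner to every non-human identity is what restores the lifecycle: when the owner leaves or the task ends, the token has someone whose offboarding triggers its revocation.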

Framework Mapping

Framework   | Reference                                | Rationale
------------|------------------------------------------|--------------------------------------------------------
MITRE ATLAS | AML.T0051 – LLM Prompt Injection         | Cross-app prompt injection via IDE-to-Slack MCP bridges
MITRE ATLAS | AML.T0057 – LLM Data Leakage             | Plaintext credential exposure in agent message stores
MITRE ATLAS | AML.T0012 – Valid Accounts               | Hijacking agents using legitimately issued tokens
OWASP       | LLM08 – Excessive Agency                 | Agents holding scopes beyond task requirements
OWASP       | LLM07 – Insecure Plugin Design           | MCP connectors creating unreviewed cross-app trust
OWASP       | LLM06 – Sensitive Information Disclosure | API keys and credentials stored in agent message tables

Impact Assessment

The Moltbook breach directly exposed credentials for external services — meaning the blast radius extended beyond the platform itself to every downstream API those keys could reach. At scale, this pattern threatens any organisation relying on AI agents integrated across productivity, development, and CRM tooling. The Cloud Security Alliance’s State of SaaS Security 2025 report noted that 56% of organisations are already concerned about SaaS-to-SaaS exposure, and AI agent proliferation is accelerating the problem faster than governance frameworks are adapting.

Mitigation & Recommendations

  1. Map all non-human identities — enumerate every agent, bot, and service account and document which OAuth scopes they hold across which applications.
  2. Apply least-privilege scoping — restrict agent tokens to the minimum scopes required per task; revoke any grants not actively in use.
  3. Adopt cross-app access review tooling — single-application IAM reviews are structurally insufficient; use tooling that reasons about combined permission sets across integrated platforms.
  4. Prohibit plaintext credential storage in agent contexts — enforce secrets management (e.g., vault-based injection) and audit agent message stores for credential leakage.
  5. Treat MCP servers as a governance boundary — require explicit security sign-off on any MCP connector that creates bidirectional trust between applications.
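Steps 1 and 2 reduce to a set difference once the inventory exists: given a per-task scope policy and the scopes actually granted, everything in the gap is a revocation candidate. A minimal sketch, with hypothetical agents and scope names:

```python
# Hypothetical per-task scope policy vs. scopes actually granted.
REQUIRED = {
    "release-notes-agent": {"chat:write"},
    "crm-sync-agent":      {"records:read"},
}
GRANTED = {
    "release-notes-agent": {"chat:write", "files:read", "users:read"},
    "crm-sync-agent":      {"records:read", "records:write"},
}

def excess_scopes(required, granted):
    """Scopes to revoke: granted minus what the task actually needs."""
    report = {}
    for agent, scopes in granted.items():
        extra = scopes - required.get(agent, set())
        if extra:
            report[agent] = extra
    return report

flagged = excess_scopes(REQUIRED, GRANTED)
for agent, extra in flagged.items():
    print(f"{agent}: revoke {sorted(extra)}")
```

The hard part in practice is producing REQUIRED, since it forces each agent owner to state what the task actually needs; the diff itself is trivial, which is the point of the recommendation.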

References