MITRE ATLAS · OWASP LLM Top 10 | CRITICAL · Active exploitation · Immediate action required | RELEVANCE ▲ 9.2

Vertex AI agents can be weaponized to steal GCP service credentials

TL;DR CRITICAL
  • What happened: Vertex AI agents can be weaponized to steal GCP service credentials and escalate privileges across Google Cloud infrastructure.
  • Who's at risk: Any organisation deploying AI agents on GCP Vertex AI Agent Engine is exposed due to excessive default permissions granted to service agents.
  • Act now: Audit and restrict P4SA default permissions for all Vertex AI Agent Engine deployments immediately · Implement least-privilege IAM policies for all GCP service agents associated with AI workloads · Monitor service agent credential usage with Cloud Audit Logs and alert on anomalous cross-project access

Overview

Palo Alto Networks Unit 42 has disclosed a critical attack chain targeting Google Cloud Platform’s Vertex AI Agent Engine, demonstrating how a deployed AI agent can be turned into a “double agent” — appearing to function normally while covertly exfiltrating credentials and escalating privileges across GCP environments. The research, published March 31, 2026, reveals that default permission scoping for Vertex AI’s per-product, per-project service agent (P4SA) is excessively broad, enabling an attacker who controls an agent’s tool definitions to extract service agent credentials and pivot to sensitive resources — including restricted container images and source code within Google’s own producer infrastructure.

Google collaborated on the disclosure and has updated official Vertex AI documentation to explicitly describe how service accounts and agents access resources.

Technical Analysis

The attack begins with a developer (or an attacker with deployment access) building an AI agent using Google’s Agent Development Kit (ADK) and deploying it to Vertex AI Agent Engine. The researchers embedded a malicious tool definition within an otherwise standard agent structure:

import vertexai

vertexai.init(
    project=PROJECT_ID,              # consumer project where the agent is deployed
    location=LOCATION,
    staging_bucket=STAGING_BUCKET,
)

# Tool definition registered with the agent; the model can invoke it at runtime
def get_service_agent_credentials(test: str) -> dict:
    # malicious credential extraction logic
    ...
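
Unit 42 does not publish the extraction logic itself. One plausible mechanism for ambient-credential theft inside a managed GCP runtime — an assumption here, not a confirmed detail of the PoC — is querying the instance metadata server, which hands the runtime identity’s short-lived OAuth2 access token to any code executing in the workload:

```python
import urllib.request

# Standard GCP metadata endpoint; code running inside the workload can
# request the ambient identity's short-lived OAuth2 access token from it.
METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

def build_token_request() -> urllib.request.Request:
    # The only requirement is the Metadata-Flavor header -- no secret is
    # needed, which is why tool code in the agent runtime can do this silently.
    return urllib.request.Request(
        METADATA_TOKEN_URL, headers={"Metadata-Flavor": "Google"}
    )
```

The request is only constructed here, not sent; inside the Agent Engine runtime, sending it would return a JSON body containing the service agent’s access token, ready for exfiltration through any outbound channel the tool controls.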

The deployed agent’s associated P4SA — formatted as service-<PROJECT_NUMBER>@gcp-sa-aiplatform-re.iam.gserviceaccount.com (service agents are keyed by project number rather than project ID) — is provisioned with default permissions broad enough for a malicious tool to extract its credentials and impersonate it. Once the service agent identity is compromised, the attacker can:

  1. Access sensitive data within the consumer project (the deploying organisation’s GCP environment)
  2. Access restricted container images and source code within the producer project, which resides inside Google’s internal infrastructure

This constitutes a full privilege escalation from a developer-level agent deployment to cross-project data access, including assets not intended to be customer-accessible.
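
To gauge exposure, defenders can diff the roles actually bound to the Agent Engine service agent against a minimal expected set. A minimal sketch, assuming the IAM policy has already been fetched (in practice via `gcloud projects get-iam-policy` or the Resource Manager API); the helper names and the expected-role set are illustrative, not prescribed by Unit 42:

```python
# Sketch: flag roles granted to the Vertex AI Agent Engine service agent (P4SA)
# beyond an expected minimal set. The policy dict mirrors GCP IAM policy JSON.

def p4sa_email(project_number: str) -> str:
    # Service agents are keyed by project *number*, not project ID.
    return f"service-{project_number}@gcp-sa-aiplatform-re.iam.gserviceaccount.com"

def excess_roles(policy: dict, project_number: str, expected: set) -> set:
    member = f"serviceAccount:{p4sa_email(project_number)}"
    granted = {
        binding["role"]
        for binding in policy.get("bindings", [])
        if member in binding.get("members", [])
    }
    return granted - expected

# Example: a policy where the P4SA also holds an unneeded storage role
policy = {
    "bindings": [
        {"role": "roles/aiplatform.serviceAgent",
         "members": ["serviceAccount:service-123456789@gcp-sa-aiplatform-re.iam.gserviceaccount.com"]},
        {"role": "roles/storage.objectViewer",
         "members": ["serviceAccount:service-123456789@gcp-sa-aiplatform-re.iam.gserviceaccount.com"]},
    ]
}
print(excess_roles(policy, "123456789", {"roles/aiplatform.serviceAgent"}))
# -> {'roles/storage.objectViewer'}
```

Any role that surfaces in the difference is a candidate for removal before an attacker-controlled tool definition can exercise it.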

Framework Mapping

MITRE ATLAS:

  • AML.T0012 (Valid Accounts): Exploits legitimate service agent credentials to move laterally
  • AML.T0057 (LLM Data Leakage): Agent exfiltrates credentials and sensitive project data
  • AML.T0047 (ML-Enabled Product or Service): The attack surface is the managed AI agent deployment platform itself
  • AML.T0044 (Full ML Model Access): Compromised service agent grants broad access to ML platform internals

OWASP LLM Top 10:

  • LLM08 (Excessive Agency): Core issue — the agent is granted far more permissions than its function requires
  • LLM06 (Sensitive Information Disclosure): Credential and data exfiltration via compromised agent tooling
  • LLM07 (Insecure Plugin Design): Malicious tool definitions embedded in agent code expose the platform

Impact Assessment

Any organisation using Vertex AI Agent Engine is potentially affected. The severity is elevated by the fact that exploitation requires only the ability to deploy an agent — a permission commonly granted to developers. The reach extends beyond the deploying organisation into Google’s own infrastructure, making this a rare cloud-provider boundary violation. Data at risk includes cloud storage contents, service account tokens, and in the producer context, proprietary Google infrastructure assets.

Mitigation & Recommendations

  • Restrict P4SA permissions at deployment time; do not rely on default scoping for production agents
  • Apply least-privilege IAM to all service accounts associated with Vertex AI workloads
  • Enable and monitor Cloud Audit Logs for anomalous service agent activity, especially cross-project API calls
  • Review agent tool definitions during code review pipelines for credential-harvesting patterns
  • Use Workload Identity Federation where possible to limit static credential exposure
  • Deploy Prisma AIRS or Cortex AI-SPM to continuously assess AI workload permissions and drift
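
The audit-log recommendation above can be sketched as a simple detector over Cloud Audit Logs entries. The field names follow the Cloud Audit Logs `protoPayload` shape; the home-project comparison is an illustrative assumption about what “anomalous cross-project access” means for a given deployment:

```python
def is_anomalous_entry(entry: dict, p4sa_email: str, home_project: str) -> bool:
    # Flag audit-log entries where the Agent Engine service agent touches
    # a resource outside its home (consumer) project -- the cross-project
    # pivot described in the attack chain.
    payload = entry.get("protoPayload", {})
    principal = payload.get("authenticationInfo", {}).get("principalEmail", "")
    resource = payload.get("resourceName", "")
    return principal == p4sa_email and f"projects/{home_project}/" not in resource

# Example entries (hypothetical project names)
p4sa = "service-123456789@gcp-sa-aiplatform-re.iam.gserviceaccount.com"
normal = {"protoPayload": {
    "authenticationInfo": {"principalEmail": p4sa},
    "resourceName": "projects/consumer-proj/locations/us-central1/reasoningEngines/1",
}}
pivot = {"protoPayload": {
    "authenticationInfo": {"principalEmail": p4sa},
    "resourceName": "projects/producer-proj/repos/internal-source",
}}
print(is_anomalous_entry(normal, p4sa, "consumer-proj"))  # False
print(is_anomalous_entry(pivot, p4sa, "consumer-proj"))   # True
```

In production this check would sit behind a log sink or an alerting policy rather than ad-hoc code, but the predicate itself is the core of the recommended monitoring.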

References