Frameworks: MITRE ATLAS · OWASP LLM Top 10 | Severity: MEDIUM (moderate risk · monitor closely) | Relevance: 6.2

OpenAI Launches Phishing-Resistant Security Mode for High-Risk ChatGPT Accounts

TL;DR MEDIUM
  • What happened: OpenAI has launched an optional hardened account mode that disables passwords, removes SMS and email recovery, and cuts off support-channel account recovery to block social engineering.
  • Who's at risk: Journalists, political dissidents, researchers, and security professionals whose ChatGPT/Codex accounts hold sensitive personal or professional context are most exposed to targeted account takeover.
  • Act now: Enable Advanced Account Security on any ChatGPT or Codex account holding sensitive or professional data · Provision at least two physical security keys or passkeys before activating the feature to avoid lockout · If enrolled in OpenAI's Trusted Access for Cyber programme, comply with mandatory enforcement before June 1 or configure phishing-resistant enterprise SSO

Overview

OpenAI has announced Advanced Account Security, an optional hardened protection tier for ChatGPT and Codex accounts, targeting users who face an elevated risk of adversarial account compromise. The announcement, made on 30 April 2026, mirrors analogous schemes such as Google's Advanced Protection Program, which has existed for nearly a decade. The move is explicitly framed as part of OpenAI's broader cybersecurity strategy and acknowledges that AI accounts increasingly sit at the centre of sensitive personal and professional workflows.

As AI platforms accumulate context about their users — from private queries to integrated tooling and agentic workflows — these accounts become high-value targets for nation-state actors, cybercriminals, and politically motivated attackers seeking intelligence or operational disruption.

Technical Analysis

Advanced Account Security enforces several layered controls:

  • Password elimination: Standard passwords are disabled. Users must register a minimum of two physical security keys or passkeys (FIDO2/WebAuthn), which are inherently phishing-resistant because they bind to the legitimate origin domain.
  • Recovery channel hardening: Email and SMS-based account recovery are removed entirely. Recovery is only possible via backup passkeys, recovery keys, or registered physical security keys — eliminating the most commonly abused recovery vectors.
  • Support isolation: OpenAI’s own support team loses the ability to perform account recovery actions. This is a critical control that closes the social engineering attack surface against support and IT staff — a technique central to high-profile breaches such as the Uber and MGM Resorts incidents.
  • Session tightening: Sign-in session durations are shortened, reducing the window of exposure from stolen session tokens.
  • Login alerting: Every new authentication event triggers an alert, enabling rapid detection of unauthorised access attempts.
  • Training opt-out by default: Conversations are excluded from model training by default, reducing the risk of sensitive data leakage into future model iterations.
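
The origin-binding property behind the first control can be illustrated with a minimal sketch. During a WebAuthn sign-in, the browser — not the user — writes the exact origin it is talking to into the signed clientDataJSON, and the relying party rejects any assertion whose origin does not match its own. Credentials exercised on a look-alike phishing domain therefore fail automatically. This is an illustrative check against an assumed origin, not OpenAI's implementation:

```python
import base64
import json

EXPECTED_ORIGIN = "https://chatgpt.com"  # assumption: the relying party's real origin


def verify_client_data(client_data_b64: str, expected_challenge: str) -> bool:
    """Check the type, origin, and challenge fields of a WebAuthn clientDataJSON blob.

    The browser fills in `origin`, so an assertion produced on a phishing
    domain carries the wrong origin and this check fails.
    """
    padded = client_data_b64 + "=" * (-len(client_data_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    return (
        client_data.get("type") == "webauthn.get"
        and client_data.get("origin") == EXPECTED_ORIGIN
        and client_data.get("challenge") == expected_challenge
    )


def encode(payload: dict) -> str:
    """Base64url-encode a clientDataJSON payload, as the browser would."""
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()


# A legitimate sign-in passes; the same challenge signed on a look-alike
# phishing domain is rejected purely on the origin field.
good = encode({"type": "webauthn.get", "origin": "https://chatgpt.com", "challenge": "abc123"})
bad = encode({"type": "webauthn.get", "origin": "https://chatgpt-login.example", "challenge": "abc123"})
print(verify_client_data(good, "abc123"))  # True
print(verify_client_data(bad, "abc123"))   # False
```

In a real deployment this verification happens server-side alongside signature validation over the authenticator data, which is what makes the binding cryptographic rather than advisory.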

OpenAI has partnered with Yubico to offer discounted YubiKey bundles to enrolled users, lowering the barrier to hardware key adoption.

Framework Mapping

  • AML.T0012 (Valid Accounts): The primary threat model addressed here is credential-based account takeover, where attackers obtain valid session credentials through phishing or social engineering to access AI inference APIs and stored conversation data.
  • AML.T0040 (ML Model Inference API Access): Compromised accounts provide unauthorised access to Codex and ChatGPT inference capabilities, which could be abused for scaled misuse or intelligence gathering.
  • LLM06 (Sensitive Information Disclosure): Accounts accumulating personal, professional, or organisational context represent a disclosure risk if compromised; the feature directly mitigates this.

Impact Assessment

The at-risk population explicitly identified by OpenAI includes journalists, elected officials, political dissidents, and security researchers — groups historically targeted by sophisticated threat actors. For these users, an account compromise could expose sensitive sources, strategic plans, or research findings. The mandatory enforcement for Trusted Access for Cyber programme members by June 1 also signals that OpenAI is treating privileged API access as a security perimeter worthy of strong authentication controls.

Mitigation & Recommendations

  1. Enrol in Advanced Account Security for any ChatGPT or Codex account used in sensitive or professional contexts.
  2. Provision hardware security keys (e.g., YubiKey 5 series) before enabling the feature to ensure account recovery paths are established.
  3. Audit connected integrations: review which third-party tools and workflows are linked to your account, as a compromise could cascade through connected services.
  4. Organisations deploying Codex should mandate phishing-resistant SSO and treat AI platform credentials with the same rigour as cloud IAM credentials.
  5. Security awareness: train staff to recognise that AI platform accounts are now a legitimate target for social engineering, not just traditional IT systems.
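
To act on the login alerting described above, teams need a triage step that separates routine sign-ins from anomalous ones. A common pattern is to keep a per-account baseline of previously seen device/location pairs and flag anything new. The event shape below is hypothetical — OpenAI has not published an alert schema — so this is a sketch of the pattern only:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LoginEvent:
    # Hypothetical alert fields; not an OpenAI-published schema.
    account: str
    device_id: str
    country: str


def triage(events: list[LoginEvent], known: dict[str, set]) -> list[LoginEvent]:
    """Flag login events whose (device, country) pair is new for the account.

    `known` maps account -> set of (device_id, country) pairs seen before.
    New pairs are flagged for review, then added to the baseline.
    """
    flagged = []
    for ev in events:
        fingerprint = (ev.device_id, ev.country)
        baseline = known.setdefault(ev.account, set())
        if fingerprint not in baseline:
            flagged.append(ev)
            baseline.add(fingerprint)
    return flagged


events = [
    LoginEvent("journalist@example.org", "yubikey-5c-01", "GB"),
    LoginEvent("journalist@example.org", "yubikey-5c-01", "GB"),  # repeat: not flagged
    LoginEvent("journalist@example.org", "unknown-device", "RU"),  # new pair: flagged
]
flagged = triage(events, known={})
print([e.device_id for e in flagged])  # ['yubikey-5c-01', 'unknown-device']
```

Because every authentication in this mode raises an alert, some baseline-and-diff step like this is what keeps the signal actionable rather than noisy.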
