ChatGPT's code runtime silently exfiltrates user data via malicious prompt
Check Point Research disclosed a critical vulnerability in ChatGPT's code execution runtime that allows a single malicious prompt to establish a covert outbound exfiltration channel, bypassing …
MITRE ATLAS techniques:
AML.T0051 - LLM Prompt Injection
AML.T0057 - LLM Data Leakage
AML.T0047 - ML-Enabled Product or Service