Docker has patched a critical vulnerability in its Ask Gordon AI assistant that allowed attackers to execute code and exfiltrate data by embedding malicious instructions in Docker image metadata labels.
Docker has addressed a critical security flaw in its Ask Gordon AI assistant that could have allowed attackers to execute arbitrary code and steal sensitive data through malicious Docker image metadata. The vulnerability, dubbed DockerDash by cybersecurity firm Noma Labs, was patched in Docker version 4.50.0 released in November 2025.
The Vulnerability: Meta-Context Injection
The flaw stems from how Ask Gordon processes Docker image metadata. The AI assistant treats unverified metadata labels as executable commands, creating a dangerous trust boundary violation. When a user queries Ask Gordon about a Docker image, the assistant reads all LABEL fields in the image metadata and forwards them to the MCP (Model Context Protocol) Gateway without proper validation.
"In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP Gateway, which then executes it through MCP tools," explained Sasi Levi, security research lead at Noma.
Attack Chain Explained
The exploitation process follows a straightforward three-step pattern:
- Image Creation: An attacker crafts a malicious Docker image with weaponized LABEL instructions in the Dockerfile
- Metadata Processing: When a victim queries Ask Gordon about the image, the AI reads and interprets the malicious instructions embedded in the LABEL fields
- Code Execution: Ask Gordon forwards the parsed instructions to the MCP Gateway, which executes them with the victim's Docker privileges
Impact and Risk
Successful exploitation could result in:
- Critical-impact remote code execution for cloud and CLI systems
- High-impact data exfiltration for desktop applications
- Access to sensitive information including installed tools, container details, Docker configuration, mounted directories, and network topology
The vulnerability highlights a fundamental issue with contextual trust in AI systems. MCP acts as a connective tissue between large language models and the local environment, but fails to distinguish between legitimate metadata and malicious instructions.
Additional Prompt Injection Flaw
Version 4.50.0 also resolves another prompt injection vulnerability discovered by Pillar Security. This flaw could have allowed attackers to hijack Ask Gordon and exfiltrate sensitive data by tampering with Docker Hub repository metadata with malicious instructions.
Mitigation and Recommendations
Docker has addressed these vulnerabilities through improved validation mechanisms in Ask Gordon version 4.50.0. However, the incident underscores the growing importance of AI Supply Chain Risk management.
"The DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat," Levi warned. "It proves that your trusted input sources can be used to hide malicious payloads that easily manipulate AI's execution path. Mitigating this new class of attacks requires implementing zero-trust validation on all contextual data provided to the AI model."
Users should immediately update to Docker Desktop version 4.50.0 or later to protect against these vulnerabilities. Organizations should also review their AI assistant implementations and ensure proper input validation and trust boundary enforcement.
This incident serves as a wake-up call for the industry, demonstrating how AI assistants can become attack vectors when they process untrusted data without adequate security controls. As AI integration becomes more prevalent in development tools, similar vulnerabilities may emerge across other platforms, making robust security validation essential for all AI-powered features.

Comments
Please log in or register to join the discussion