Microsoft's NLWeb AI Protocol Exposed: Path Traversal Flaw Risks LLM 'Brains'
Microsoft's vision for an AI-powered 'Agentic Web' faces its first major security crisis, as researchers exposed a critical vulnerability in its newly launched NLWeb protocol that could compromise the core intelligence of AI systems. The flaw, discovered just weeks after NLWeb's high-profile debut at Microsoft Build 2025, allowed attackers to steal sensitive API keys for large language models (LLMs) through a simple malformed URL—a basic security oversight with catastrophic implications.
NLWeb was pitched as revolutionary infrastructure to democratize AI search capabilities, enabling websites and apps to integrate ChatGPT-like functionality cheaply. Dubbed "HTML for the Agentic Web," it promised to reshape digital interactions for early adopters like Shopify and TripAdvisor. Yet security researchers Aonan Guan and Lei Wang uncovered a path traversal vulnerability in its implementation that exposed critical system files.
"An attacker doesn’t just steal a credential; they steal the agent’s ability to think, reason, and act," said Guan, a senior cloud security engineer who independently discovered the flaw. "Compromising LLM API keys could lead to massive financial loss from API abuse or creation of malicious clones."
The vulnerability allowed unauthenticated access to .env files containing keys for services like OpenAI's GPT-4 and Google's Gemini—effectively granting attackers control over an AI agent's cognitive engine. Path traversal flaws, while well-documented in traditional web security, take on new gravity in AI systems where API keys represent direct access to expensive, powerful models.
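The core mistake in a path traversal bug is joining user-supplied path segments onto a base directory without checking where the resolved path actually lands, so a request like `../../.env` escapes the web root. A minimal sketch of the defense (hypothetical function and directory names, not NLWeb's actual code) resolves the candidate path and rejects anything outside the intended base:

```python
from pathlib import Path


def safe_resolve(base_dir: str, requested: str) -> Path:
    """Resolve a user-supplied path, rejecting traversal outside base_dir.

    Naive concatenation (base_dir + requested) is the vulnerable pattern:
    "../../.env" would walk out of the web root and reach secrets files.
    """
    base = Path(base_dir).resolve()
    candidate = (base / requested).resolve()
    # After resolution, ".." segments are gone; if the result no longer
    # sits under the base directory, the request was a traversal attempt.
    if not candidate.is_relative_to(base):
        raise PermissionError("path traversal attempt blocked")
    return candidate
```

`safe_resolve("/srv/app/static", "index.html")` returns the file path, while `safe_resolve("/srv/app/static", "../../.env")` raises `PermissionError`. The key design point is checking the path *after* normalization; string-level filters for `..` are easy to bypass with encodings.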
Microsoft patched the open-source repository on July 1st after being notified on May 28th, but notably declined to issue a CVE (Common Vulnerabilities and Exposures) identifier. This omission complicates vulnerability tracking for enterprises. While Microsoft says the flawed code isn't used in its own products, Guan warns that public NLWeb deployments remain at risk until operators pull the patched code themselves.
The incident exposes a tension in Microsoft's AI strategy. As the company races to embed protocols like NLWeb and Model Context Protocol (MCP) across Windows and cloud services, security researchers warn that foundational safeguards are lagging. Path traversal—a vulnerability type documented since the 1990s—shouldn't exist in flagship AI infrastructure.
For developers, this breach underscores non-negotiable priorities in the agentic web era:
1. Secrets management: LLM API keys require vault-level protection
2. Input validation: AI endpoints demand stricter sanitization than traditional web apps
3. Supply chain vigilance: Open-source AI components need adversarial testing pre-deployment
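On the first point, the simplest mitigation is to keep LLM keys out of files the web server can ever reach: inject them into the process environment from a vault or secrets manager at deploy time, and fail fast if they are missing. A hedged sketch (the variable name and error handling are illustrative, not a prescribed pattern):

```python
import os


def get_llm_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Fetch an LLM API key from the process environment.

    The key should be injected at deploy time (e.g. by a secrets manager),
    never stored in a .env file inside the web root where a path traversal
    bug could expose it to unauthenticated requests.
    """
    key = os.environ.get(name)
    if not key:
        # Refuse to start rather than run with a missing credential.
        raise RuntimeError(f"{name} is not set; refusing to start")
    return key
```

This does not make traversal bugs harmless, but it removes the highest-value target from the filesystem entirely, so a leaked `.env` yields nothing.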
Microsoft's stumble reveals a hard truth: securing AI's 'thinking' layer requires revisiting security fundamentals. As protocols like NLWeb weave LLMs into the web's fabric, yesterday's OWASP Top 10 vulnerabilities become existential threats to tomorrow's intelligent systems.