Microsoft acknowledges a bug in Copilot's work tab chat feature that bypasses DLP policies and summarizes confidential emails stored in the Sent Items and Drafts folders.
Microsoft has confirmed a significant security bug in its Microsoft 365 Copilot AI assistant that has been exposing confidential emails to unauthorized summarization since late January 2026.
The Security Flaw
The issue, tracked under identifier CW1226324, affects the Copilot "work tab" chat feature within Microsoft 365 Copilot Chat. This AI-powered assistant, which Microsoft began rolling out to Word, Excel, PowerPoint, Outlook, and OneNote for business customers in September 2025, has been incorrectly processing email messages that should have been protected by confidentiality labels and data loss prevention (DLP) policies.
According to Microsoft's service alert, the bug allows Copilot to read and summarize emails stored in users' Sent Items and Drafts folders, even when these messages carry sensitivity labels explicitly designed to restrict access by automated tools. This represents a fundamental failure of the security controls that organizations rely on to protect sensitive information.
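To make the failure concrete, the sketch below shows the kind of label-aware check that sensitivity labels and DLP policies are expected to impose before any automated summarization. Every name in it (EmailMessage, RESTRICTED_LABELS, summarize_if_permitted) is hypothetical and purely illustrative; it is not Microsoft's implementation.

```python
# Illustrative sketch only: a label-aware gate of the kind DLP/sensitivity
# controls are meant to enforce before an assistant touches a message.
# All names here are hypothetical; this is not Microsoft's code.
from dataclasses import dataclass


@dataclass
class EmailMessage:
    folder: str              # e.g. "Inbox", "SentItems", "Drafts"
    sensitivity_label: str   # e.g. "Confidential", "General", or ""
    body: str


# Labels that a DLP policy marks as off-limits to automated processing.
RESTRICTED_LABELS = {"Confidential", "Highly Confidential"}


def summarize_if_permitted(message: EmailMessage) -> str | None:
    """Return a summary only when the message's label permits it."""
    if message.sensitivity_label in RESTRICTED_LABELS:
        # Expected behaviour: refuse to process labeled content.
        return None
    return summarize(message.body)


def summarize(text: str) -> str:
    # Placeholder for the actual AI summarization step.
    return text[:100] + "..."
```

Described in these terms, the reported bug amounts to messages in Sent Items and Drafts reaching the summarization step without the label check ever being applied.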
Technical Details of the Vulnerability
Microsoft attributes the problem to an unspecified "code issue" that permits items in Sent Items and Drafts folders to be accessed by Copilot despite the presence of confidential labels. The company began rolling out a fix in early February 2026, though as of mid-February, the deployment was still ongoing.
"A code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place," Microsoft stated in its advisory. The company is actively monitoring the deployment and reaching out to affected users to verify the fix's effectiveness.
Scope and Impact
Microsoft has not disclosed how many users or organizations were affected by this security lapse, noting only that "the scope of impact may change as the investigation continues." However, the incident has been classified as an advisory, suggesting the impact may be limited to a subset of users rather than a widespread breach.
This classification matters: in the Microsoft 365 service health dashboard, advisories denote issues with limited scope or impact, while broader or more severe service problems are classified as incidents.
Security Implications
The bug raises serious questions about the security architecture of AI assistants integrated into enterprise environments. Organizations implement DLP policies and sensitivity labels specifically to prevent unauthorized access to confidential information, and this vulnerability effectively bypasses those protections.
For businesses handling sensitive communications, the prospect of an AI assistant summarizing confidential emails without proper authorization represents a significant compliance and security risk. This could potentially expose trade secrets, financial information, legal communications, or personal data that should remain protected.
Microsoft's Response
Microsoft's response has been methodical, if not especially fast. The company confirmed the issue shortly after detecting it on January 21, 2026, and began working on a fix within days; however, with the deployment still in progress as of mid-February, remediation is taking longer than initially anticipated.
"Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," Microsoft acknowledged in its service alert. "The Microsoft 365 Copilot 'work tab' Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured."
Broader Context
This incident occurs against the backdrop of increasing integration of AI assistants into enterprise productivity suites. As companies rush to deploy AI tools to enhance productivity, security concerns about data exposure and unauthorized access have become paramount.
The Copilot bug highlights the challenges of implementing AI systems that can access and process sensitive corporate data while maintaining strict security boundaries. It also underscores the importance of rigorous testing and validation of AI systems before deployment in enterprise environments.
What Organizations Should Do
While Microsoft works on the fix, organizations using Microsoft 365 Copilot should:
- Monitor their Microsoft 365 service alerts for updates on this issue (a scripted approach is sketched after this list)
- Review their DLP policies and sensitivity label configurations
- Consider temporarily restricting Copilot access to email content if possible
- Document any potential exposure of confidential information
- Prepare for potential compliance reviews if sensitive data may have been exposed
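For the first recommendation, a scripted check can help. The sketch below polls the Microsoft Graph service health endpoint for the advisory's tracking ID. It assumes an Azure AD app registration with the ServiceHealth.Read.All permission and an already-acquired access token (for example via MSAL); the exact identifier and how your tenant surfaces the advisory may differ, so treat this as a starting point rather than a drop-in monitor.

```python
# Minimal sketch: poll Microsoft Graph service health for the Copilot advisory.
# Assumes ServiceHealth.Read.All permission and a valid access token; token
# acquisition is out of scope here.
import requests

GRAPH_ISSUES_URL = "https://graph.microsoft.com/v1.0/admin/serviceAnnouncement/issues"
TRACKING_ID = "CW1226324"  # identifier cited in Microsoft's advisory


def fetch_advisory(access_token: str) -> dict | None:
    """Return the service health issue matching the tracking ID, if present."""
    headers = {"Authorization": f"Bearer {access_token}"}
    resp = requests.get(GRAPH_ISSUES_URL, headers=headers, timeout=30)
    resp.raise_for_status()
    for issue in resp.json().get("value", []):
        if issue.get("id") == TRACKING_ID:
            return issue
    return None


if __name__ == "__main__":
    token = "<access-token>"  # obtain via MSAL or another OAuth flow
    advisory = fetch_advisory(token)
    if advisory:
        print(advisory.get("status"), "-", advisory.get("title"))
    else:
        print(f"{TRACKING_ID} not found in current service health issues")
```

Running a check like this on a schedule makes it easier to spot when Microsoft marks the advisory as resolved, rather than relying on someone remembering to revisit the admin center.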
Looking Forward
This incident serves as a reminder that AI integration in enterprise environments requires careful consideration of security implications. As AI assistants become more sophisticated and gain broader access to corporate data, ensuring they respect existing security boundaries becomes increasingly critical.
The resolution of this bug will likely influence how Microsoft and other vendors approach the security architecture of AI assistants in the future, potentially leading to more robust isolation mechanisms and stricter access controls for AI systems processing sensitive information.
For now, affected organizations must balance the productivity benefits of AI assistants against the security risks exposed by this vulnerability, while Microsoft works to fully remediate the issue and restore confidence in its security controls.
