Microsoft 365 Copilot Chat has been summarizing emails marked 'confidential' despite DLP policies, a flaw that let the AI read and discuss information it was explicitly configured not to touch.
Microsoft 365 Copilot Chat has been caught summarizing emails labeled "confidential" even when data loss prevention (DLP) policies were specifically configured to block such access, according to a recent security notice from Redmond.

The issue, tracked as CW1226324, was first reported by customers on January 21, 2026, and affects how Copilot Chat processes sensitive email content. Despite sensitivity labels being applied to confidential emails and DLP policies being configured to prevent unauthorized access, the AI assistant continued to summarize and discuss these restricted messages in the Copilot Chat tab.
This security lapse highlights a fundamental inconsistency in how Microsoft's labeling system works across different applications. While sensitivity labels can exclude content from Microsoft 365 Copilot in named Office apps like Word and Excel, the same protection doesn't extend to Teams or Copilot Chat. As Microsoft's own documentation admits, "content with the configured sensitivity label will be excluded from Microsoft 365 Copilot in the named Office apps, the content remains available to Microsoft 365 Copilot for other scenarios."
The root cause appears to be a code issue that allowed items in the Sent Items and Drafts folders to be processed by Copilot even when a confidential sensitivity label was applied. In other words, emails users believed were protected by DLP policies were still being read and summarized by the AI assistant.
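In rough terms, the failure mode Microsoft describes looks like the sketch below, in which label enforcement is tied to specific folders and anything outside them is never checked. The folder names, label values, and functions here are hypothetical illustrations of that class of gap, not Microsoft's actual code.

```python
# Illustrative only: a simplified, hypothetical grounding filter showing how a
# folder-scoping gap of the kind described could let labeled mail through.
# None of these names come from Microsoft's implementation.

CONFIDENTIAL_LABELS = {"Confidential", "Highly Confidential"}

# Hypothetical bug: label enforcement is only applied to these folders,
# so Sent Items and Drafts are never checked.
LABEL_ENFORCED_FOLDERS = {"Inbox", "Archive"}

def is_summarizable(message: dict) -> bool:
    """Return True if a message may be handed to the assistant for summarization."""
    if message["folder"] in LABEL_ENFORCED_FOLDERS:
        return message.get("sensitivity_label") not in CONFIDENTIAL_LABELS
    # Messages in unenforced folders (e.g. "Sent Items", "Drafts") skip the
    # label check entirely -- the class of gap the notice describes.
    return True

def is_summarizable_fixed(message: dict) -> bool:
    # The obvious remedy: evaluate the label on every item, regardless of folder.
    return message.get("sensitivity_label") not in CONFIDENTIAL_LABELS
```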
Data Loss Prevention is supposed to be Microsoft's safeguard against oversharing in enterprise environments. The system monitors and protects against unauthorized data access across Microsoft 365 locations like Exchange and SharePoint, as well as endpoints and non-Microsoft cloud apps. In theory, DLP policies should be able to affect Microsoft 365 Copilot and Copilot Chat, but this incident demonstrates that the practical implementation falls short.
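The behavior customers expected can be summed up as an ordering rule: the policy engine renders its verdict before any content is handed to the model. The sketch below makes that ordering explicit; the dlp_verdict and assistant helpers are hypothetical placeholders, since Copilot's internals are not public.

```python
# A minimal sketch of the enforcement order customers expect, assuming a
# hypothetical dlp_verdict() helper that evaluates the tenant's labels and
# DLP rules, and a hypothetical assistant object that performs summarization.

def summarize_if_permitted(message, assistant, dlp_verdict) -> str:
    """Ground the assistant only on content the policy engine permits."""
    if dlp_verdict(message) == "block":       # label + DLP rules evaluated first
        return "This message is protected by policy and cannot be summarized."
    return assistant.summarize(message.body)  # grounding happens only after the check
```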
The security flaw is particularly concerning given that 72 percent of S&P 500 companies have cited AI as a material risk in their regulatory filings. This incident provides a concrete example of why corporate America is so worried about AI systems inadvertently exposing sensitive information.
Microsoft has acknowledged the problem and says it's in the process of remediating the issue. The company is contacting affected customers to verify the effectiveness of the fix, though a specific timeline for resolution has not been provided.
The incident raises broader questions about the reliability of AI systems in enterprise environments where data sensitivity is paramount. If an AI assistant can bypass configured security policies to access confidential information, it undermines the very purpose of having such policies in place.
For organizations relying on Microsoft 365 Copilot Chat, this security lapse serves as a stark reminder that AI systems, despite their advanced capabilities, can still have significant blind spots when it comes to respecting security boundaries. Until the issue is fully resolved, companies may need to reconsider how they use Copilot Chat with sensitive communications or implement additional safeguards to prevent unauthorized AI access to confidential information.
