Microsoft Copilot's DLP Bug: What the Confidential Email Leak Means for Your Enterprise

E2E Agentic Bridge · February 26, 2026

For weeks, Microsoft 365 Copilot Chat was reading and summarizing confidential emails — emails that sensitivity labels and DLP policies were explicitly configured to protect. If your organization deploys Copilot, this is the incident you need to understand.

What Happened

On January 21, 2026, customers began reporting that Microsoft 365 Copilot Chat was surfacing content from emails marked with confidential sensitivity labels. Despite having Data Loss Prevention (DLP) policies in place — policies specifically designed to prevent Copilot from processing labeled content — the AI was summarizing these emails anyway.

Microsoft acknowledged the issue on February 3 in service health advisory CW1226324, attributing it to "a code issue [that] is allowing items in the Sent Items and Drafts folders to be picked up by Copilot even though confidential labels are set in place."

Let that sink in. For at least two weeks before Microsoft even acknowledged the problem — and potentially longer before customers noticed — Copilot was actively bypassing the very controls enterprises put in place to protect their most sensitive communications.

The bug was covered extensively by The Register, TechCrunch, Bleeping Computer, and CyberNews on February 18–19, 2026. Microsoft deployed a configuration update worldwide and stated it expected "full remediation" by February 24.

When pressed by TechCrunch, Microsoft declined to say how many customers were affected.

The Technical Details

Here's how the DLP policy for Copilot is supposed to work: administrators create a DLP rule in Microsoft Purview that says "any email or document stamped with the Confidential sensitivity label must be excluded from Copilot for Microsoft 365 processing." When working correctly, if you ask Copilot Chat about content protected by that label, it should respond that it found a reference but cannot disclose the content because it's marked as sensitive.
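
To make that contract concrete, here's a deliberately simplified Python sketch of the gate a DLP rule is supposed to impose on Copilot's retrieval step. This is illustrative only, not Microsoft's implementation; the label names, item shape, and response wording are stand-ins for the behavior described above.

```python
# Illustrative model of the intended DLP gate -- not Microsoft's code.
EXCLUDED_LABELS = {"Confidential", "Highly Confidential"}

def dlp_allows(item: dict) -> bool:
    """A rule excluding labeled content should fail any item carrying those labels."""
    return item.get("sensitivity_label") not in EXCLUDED_LABELS

def copilot_answer(items: list[dict]) -> str:
    usable = [i for i in items if dlp_allows(i)]
    blocked = len(items) - len(usable)
    if blocked and not usable:
        return "I found a reference, but it's marked as sensitive and can't be disclosed."
    return f"Summarizing {len(usable)} item(s); {blocked} excluded by policy."

# The January 2026 bug behaved as if this check silently passed for anything
# stored in Sent Items or Drafts, regardless of its label.
confidential_draft = [{"folder": "Drafts", "sensitivity_label": "Confidential",
                       "body": "Draft terms for the planned acquisition..."}]
print(copilot_answer(confidential_draft))  # correct behavior: the content stays blocked
```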

The bug broke this contract specifically for items in the Sent Items and Drafts folders in Outlook desktop. Copilot could read, summarize, and present the contents of confidential emails stored in these locations — completely ignoring the DLP policy.

Microsoft's official statement to The Register confirmed: "We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop."

Microsoft was quick to add that "this did not provide anyone access to information they weren't already authorized to see" — the emails in Sent Items and Drafts belonged to the user querying Copilot. That's a fair point on the access control side, but it entirely misses the point of DLP.

DLP policies exist because even authorized users shouldn't always be able to surface, copy, or redistribute certain content through certain channels. An executive's draft email about an upcoming acquisition shouldn't be summarizable by an AI tool that might cache, log, or present it in unexpected contexts. A policy is a policy, and Copilot violated it.

Why This Matters More Than Microsoft Admits

1. DLP Is a Compliance Requirement, Not a Nice-to-Have

For organizations in financial services, healthcare, legal, and government sectors, DLP isn't optional. It's a regulatory requirement. When a tool bypasses DLP — even briefly, even for content the user already had access to — it can constitute a compliance violation. Auditors don't care about the nuance of Sent Items versus Inbox. They care whether the control was enforced.

2. The "Access Control" Defense Is Incomplete

Microsoft's framing that "no unauthorized access occurred" is technically correct but strategically misleading. DLP operates on a different axis than access control. You might have access to read a document but be prohibited from copying it, sharing it externally, or processing it through AI tools. This bug broke the AI processing restriction. In regulated environments, that distinction matters.

3. This Is Exactly What Enterprises Feared

Since Microsoft 365 Copilot launched, the number one concern from security teams has been oversharing — the risk that AI tools would surface, aggregate, or expose sensitive information in ways that existing controls can't prevent. 72 percent of S&P 500 companies now cite AI as a material risk in their regulatory filings.

This bug validates those fears. Not in theory. In production, across Microsoft's global infrastructure, for weeks.

4. The European Parliament Already Saw This Coming

Just one day before this bug made headlines, the European Parliament's IT department told lawmakers it was blocking built-in AI features on work-issued devices, citing concerns that AI tools could upload confidential correspondence to the cloud. Their timing was either prescient or — more likely — informed by the same class of concerns this incident now proves justified.

5. Sensitivity Labels Have Inconsistent Enforcement

Microsoft's own documentation acknowledges that sensitivity labels "do not function in a consistent way" across applications. Specifically: "Although content with the configured sensitivity label will be excluded from Microsoft 365 Copilot in the named Office apps, the content remains available to Microsoft 365 Copilot for other scenarios. For example, in Teams, and in Microsoft 365 Copilot Chat."

Read that again. Microsoft is telling you, in their documentation, that labels don't protect content uniformly across the M365 ecosystem. DLP policies are supposed to fill that gap — and in this case, they failed.

The Oversharing Problem Is Structural

This bug is a symptom of a deeper issue. Microsoft 365 Copilot inherits permissions from the user and tenant. If your SharePoint sites have broken permissions, if your sensitivity labels aren't applied consistently, if your DLP policies have gaps — Copilot will find and exploit every one of those weaknesses. Not maliciously. By design.

The AI doesn't understand confidentiality. It understands access. And when even the DLP backstop fails, you have no safety net.

The organizations behind Copilot's 15 million paid seats are now being reminded, painfully, that deploying Copilot without first hardening the underlying information architecture is a gamble. One that, as of January 2026, has provably gone wrong.

What to Do Right Now

If your organization uses or is planning to deploy Microsoft 365 Copilot, here's your immediate action list:

1. Verify Your DLP Policies Are Working

Don't assume they are. Create a test email with a confidential sensitivity label, send it, then query Copilot Chat about its contents. If Copilot summarizes it, the fix hasn't reached your tenant yet — or your policy has a different gap.
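
For a repeatable canary, the sketch below uses the Microsoft Graph API to create a draft with distinctive test content. It assumes you already hold an access token with the Mail.ReadWrite delegated permission (acquired via MSAL or similar); applying the Confidential label and querying Copilot Chat remain manual steps.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token with Mail.ReadWrite>"  # placeholder; acquire via MSAL in practice
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Create a draft with distinctive canary content so a Copilot summary is unmistakable.
draft = {
    "subject": "DLP canary: confidential label test",
    "body": {
        "contentType": "Text",
        "content": "CANARY-7731: fictitious acquisition terms used only to test DLP enforcement.",
    },
}
resp = requests.post(f"{GRAPH}/me/messages", headers=HEADERS, json=draft, timeout=30)
resp.raise_for_status()
print("Draft created:", resp.json()["id"])

# Manual follow-up: apply the Confidential sensitivity label in Outlook desktop,
# then ask Copilot Chat about "CANARY-7731". A working policy should make
# Copilot decline to disclose the content.
```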

2. Check Service Health Advisory CW1226324

Log into the Microsoft 365 admin center and review the advisory. Confirm whether your tenant was affected and whether the remediation has been applied.
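
If you'd rather pull the advisory programmatically, the Graph service communications API exposes service health issues. A minimal sketch, assuming a token with the ServiceHealth.Read.All permission and that the advisory is visible to your tenant:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<token with ServiceHealth.Read.All>"  # placeholder; acquire via MSAL client credentials
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Fetch the service health issue for the Copilot DLP advisory by its ID.
resp = requests.get(f"{GRAPH}/admin/serviceAnnouncement/issues/CW1226324",
                    headers=HEADERS, timeout=30)
resp.raise_for_status()
issue = resp.json()
print(issue.get("title"), "-", issue.get("status"))
print(issue.get("impactDescription"))
```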

3. Audit Your Sensitivity Label Coverage

Labels only work if they're applied. Review your auto-labeling policies to ensure sensitive content is consistently tagged across email, SharePoint, OneDrive, and Teams. Manual labeling alone is not sufficient at enterprise scale.
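
For a quick spot check of how much content is actually labeled, Graph's extractSensitivityLabels action on drive items returns an empty label list for unlabeled files. The sketch below is an assumption-heavy starting point: the site path is a placeholder, it only looks at top-level files, and you should confirm in the Graph permissions reference which Files/Sites scope the action requires in your tenant.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<token with sufficient Files/Sites permissions>"  # assumption: check the Graph permissions reference
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def get(url):
    r = requests.get(url, headers=HEADERS, timeout=30)
    r.raise_for_status()
    return r.json()

# Placeholder site path; resolve the site, its default document library, and top-level items.
site = get(f"{GRAPH}/sites/contoso.sharepoint.com:/sites/Finance")
drive = get(f"{GRAPH}/sites/{site['id']}/drive")
items = get(f"{GRAPH}/drives/{drive['id']}/root/children").get("value", [])

for item in items:
    if "file" not in item:  # skip folders
        continue
    resp = requests.post(
        f"{GRAPH}/drives/{drive['id']}/items/{item['id']}/extractSensitivityLabels",
        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    labels = resp.json().get("labels", [])
    if not labels:
        print(f"Unlabeled: {item['name']}")
```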

4. Review Copilot's Access Scope

Understand exactly what Copilot can access in your tenant. This means auditing SharePoint permissions, Exchange access, OneDrive sharing settings, and Teams channel memberships. Oversharing at the permissions level becomes oversharing at the AI level.
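
On the SharePoint side, a useful first pass is to walk a document library with Graph and flag items carrying organization-wide or anonymous sharing links, since broadly scoped links are exactly what turns into Copilot-visible content. A minimal sketch, assuming an app token with Sites.Read.All and a placeholder site path, checking only top-level items:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<app token with Sites.Read.All>"  # placeholder; acquire via MSAL client credentials
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def get(url):
    r = requests.get(url, headers=HEADERS, timeout=30)
    r.raise_for_status()
    return r.json()

# Resolve a site (placeholder path), its default document library, and top-level items.
# No pagination or recursion in this sketch.
site = get(f"{GRAPH}/sites/contoso.sharepoint.com:/sites/Finance")
drive = get(f"{GRAPH}/sites/{site['id']}/drive")
items = get(f"{GRAPH}/drives/{drive['id']}/root/children").get("value", [])

# Flag items shared with the whole organization or with anyone holding the link.
for item in items:
    perms = get(f"{GRAPH}/drives/{drive['id']}/items/{item['id']}/permissions").get("value", [])
    for p in perms:
        scope = p.get("link", {}).get("scope")  # "anonymous", "organization", or "users"
        if scope in ("anonymous", "organization"):
            print(f"Broadly shared: {item['name']} ({scope} link)")
```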

5. Implement Restricted Content Discovery

Microsoft's Restricted Content Discovery (RCD) for SharePoint Online provides an additional layer of protection. If you're not using it, start now.

6. Run a Readiness Assessment Before (or After) Deployment

This is the most important step. A Copilot readiness assessment maps your data landscape, identifies oversharing risks, validates your DLP and sensitivity label configurations, and tells you where you're exposed — before an AI tool does it for you.

If you've already deployed Copilot without an assessment, it's not too late. But the window between "we should do this" and "we just had an incident" is closing fast. This bug proves it.

7. Monitor Microsoft's Post-Incident Report

Microsoft should publish a post-incident report (PIR) explaining what caused the code issue, when it was introduced, and what testing gaps allowed it to ship. Hold them accountable for transparency.

The Uncomfortable Truth

Microsoft 365 Copilot is a powerful productivity tool. It's also an amplifier — of good permissions and bad ones, of strong governance and weak governance, of working DLP policies and broken ones.

This incident didn't expose a flaw in the concept of enterprise AI. It exposed a flaw in the assumption that your existing controls will hold when AI starts pulling on every thread of your data fabric.

Every enterprise considering Copilot deployment — or already running it — should treat this as a wake-up call. Not to abandon AI, but to get serious about the foundation it runs on.


Don't Wait for the Next Bug to Find Your Gaps

The DLP bypass bug is fixed. The next one hasn't been found yet. And as we detailed in our analysis of the oversharing problem, DLP is only one layer of a much deeper security challenge. The question isn't whether your controls will be tested — it's whether they'll hold when they are.

Run a free Copilot readiness scan →

Understand your oversharing exposure, validate your DLP policies, and identify permission gaps. Use our complete readiness assessment checklist to systematically audit every layer before Microsoft's AI does it for you. It takes minutes, and it might save you from being the next headline.