
DLP Policies for Microsoft Copilot: Configuration Guide

E2E Agentic Bridge · March 2, 2026

Data Loss Prevention and Microsoft Copilot have a complicated relationship. DLP policies were designed to prevent sensitive data from leaving your organization through email, chat, and file sharing. Copilot introduces an entirely new vector: AI-generated responses that can contain, summarize, or reference sensitive content within your own tenant.

The February 2026 incident — where a Copilot DLP interaction bug allowed sensitive content to bypass policy tips in certain Teams scenarios — made this painfully clear. DLP for Copilot isn't optional. It's a fundamental security control that requires deliberate configuration.

Here's the complete guide to configuring DLP policies that actually work with Copilot.

How DLP Interacts with Copilot

Microsoft Copilot processes user prompts by searching across M365 content — Exchange, SharePoint, OneDrive, Teams — and generating responses that synthesize information from multiple sources. DLP policies evaluate Copilot's responses before they're delivered to the user.

According to Microsoft's documentation on DLP and Copilot, the interaction works like this:

  1. User sends a prompt to Copilot
  2. Copilot retrieves relevant content from M365 services
  3. Copilot generates a response
  4. DLP evaluates the generated response against active policies
  5. If a policy match is found, the response is blocked, modified, or delivered with a policy tip

The critical point: DLP evaluates the output, not the source content. If Copilot summarizes a document containing credit card numbers but the summary doesn't include the actual numbers, DLP won't trigger. If Copilot's response includes the numbers — or enough context to reconstruct them — DLP should catch it.

This means your DLP policies need to be configured for the types of content Copilot might generate, not just the types of content that exist in your environment.
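To make the output-versus-source distinction concrete, here's a minimal Python sketch. The regex is a simplified stand-in for a credit card SIT, not Microsoft's actual definition — the point is that evaluation runs against the generated response text, not the documents Copilot read:

```python
import re

# Simplified card-number pattern: 13-16 digits, optionally separated by
# spaces or hyphens. Illustrative only, not a real Purview SIT.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def response_triggers_dlp(response: str) -> bool:
    """Return True only if the response text itself contains the pattern."""
    return bool(CARD_PATTERN.search(response))

# A paraphrased summary does not trigger, even if the source document did:
summary = "The document lists the customer's primary card on file."
# A response that reproduces the number does:
verbatim = "The card on file is 4111 1111 1111 1111."
```

This is why the same source document can produce both a blocked and an allowed Copilot response depending on how the model phrases its answer.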

Prerequisites for Copilot DLP

Before configuring DLP policies for Copilot, ensure you have:

  • Microsoft 365 E5 or Microsoft 365 E5 Compliance license (DLP for Copilot requires E5-level licensing)
  • Microsoft Purview compliance portal access with DLP administrator role
  • Copilot deployed to at least a pilot group (so you can test policies)
  • Audit logging enabled in Microsoft Purview (for DLP incident investigation)

If you're on E3, you get basic DLP for Exchange and SharePoint but not the advanced Copilot-specific controls. The E5 requirement is non-negotiable for comprehensive Copilot DLP.

Step 1: Define Your Sensitive Information Types

DLP policies rely on Sensitive Information Types (SITs) to identify content that needs protection. Microsoft provides 300+ built-in SITs covering common patterns like credit card numbers, Social Security numbers, and passport numbers.

For Copilot, you'll likely need custom SITs beyond the built-in ones. Copilot can generate content that references sensitive information in ways that built-in patterns don't catch.

Create Custom SITs for Your Organization

Navigate to Microsoft Purview > Data Classification > Classifiers > Sensitive info types and create custom types for:

Internal project code names — If your M&A deal is code-named "Project Phoenix," Copilot might reference it in responses. Create a SIT that matches the code name and related terms.

Internal financial metrics — Revenue figures, margin percentages, customer counts, and other metrics that shouldn't appear in Copilot responses to unauthorized users.

Employee identifiers — Internal employee IDs, badge numbers, or other identifiers that go beyond standard PII patterns.

Customer-specific terms — Contract terms, SLA details, or customer names that are subject to confidentiality agreements.
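A code-name SIT can be approximated as a case-insensitive keyword list with whole-word matching. The code name and related terms below are hypothetical examples; in Purview you'd define these as keywords in the custom SIT editor:

```python
import re

# Hypothetical keyword list for an M&A code-name SIT. Whole-word matching
# avoids flagging incidental uses of common words like "phoenix".
KEYWORDS = ("project phoenix", "phoenix deal room", "phoenix diligence")

def matches_code_name(text: str) -> bool:
    """Case-insensitive whole-phrase match against the keyword list."""
    lowered = text.lower()
    return any(re.search(r"\b" + re.escape(kw) + r"\b", lowered)
               for kw in KEYWORDS)
```

Note that the bare word "phoenix" is deliberately absent from the list — matching the single word alone would flood you with false positives.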

Configure SIT Confidence Levels

Each SIT has confidence levels (low, medium, high) based on how much corroborating evidence is found. For Copilot DLP, start with medium confidence to reduce false positives:

  • High confidence: Exact pattern match plus multiple corroborating elements. Use for blocking actions.
  • Medium confidence: Pattern match with some corroboration. Use for policy tips and user notifications.
  • Low confidence: Pattern match alone. Use for logging and monitoring only.

Restricting policies to high confidence only means you'll miss Copilot responses that contain sensitive data in summarized or paraphrased form. Starting with low confidence means constant false positives that users will learn to ignore.
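The confidence-level mechanics can be sketched as pattern match plus corroboration count. The SSN pattern and keywords below are illustrative, not Microsoft's actual SIT definition:

```python
import re

# Simplified SSN-like pattern plus corroborating keywords. A real Purview
# SIT defines these as pattern + supporting evidence within a proximity window.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CORROBORATING = ("ssn", "social security", "taxpayer")

def sit_confidence(text):
    """Return None, 'low', 'medium', or 'high' for a piece of text."""
    if not SSN_PATTERN.search(text):
        return None
    hits = sum(kw in text.lower() for kw in CORROBORATING)
    if hits >= 2:
        return "high"    # pattern plus multiple corroborating elements
    if hits == 1:
        return "medium"  # pattern plus some corroboration
    return "low"         # pattern alone
```

A nine-digit number next to the words "SSN" and "Social Security" scores high; the same digits in isolation score low, which is exactly the behavior you want to tune.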

Step 2: Create DLP Policies for Copilot Locations

In Microsoft Purview, navigate to Data loss prevention > Policies > Create policy.

Policy for Microsoft 365 Copilot Location

Microsoft added "Microsoft 365 Copilot (preview)" as a DLP location in late 2025. This location specifically monitors Copilot interactions:

  1. Choose policy template: Start with "Custom policy" for maximum control
  2. Name your policy: Use a clear naming convention like "DLP-Copilot-Financial-Data"
  3. Choose locations: Select "Microsoft 365 Copilot" and optionally "Microsoft Teams" (for Copilot in Teams)
  4. Define policy settings: Configure rules based on your SITs

Configure Policy Rules

Create rules within the policy for different sensitivity levels:

Rule 1: Block high-sensitivity content

  • Conditions: Content contains SITs at high confidence (SSN, credit card, bank account)
  • Actions: Block Copilot from delivering the response
  • Notifications: Notify the user that the response was blocked due to DLP policy
  • Incident reports: Generate incident report for compliance team

Rule 2: Warn on medium-sensitivity content

  • Conditions: Content contains SITs at medium confidence (internal project names, financial metrics)
  • Actions: Show policy tip but allow the user to override with justification
  • Notifications: Display policy tip explaining why the content was flagged
  • Incident reports: Log for review but don't escalate automatically

Rule 3: Log low-sensitivity matches

  • Conditions: Content contains SITs at low confidence
  • Actions: Allow but log the interaction
  • Notifications: None (to avoid alert fatigue)
  • Incident reports: Include in weekly summary report
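The three rules form a tiered dispatch: a match is handled by the most restrictive rule whose confidence threshold it meets. A sketch of that structure as data (field names are hypothetical — the real rules live in Purview, not in code):

```python
# The three rules above as data, ordered most to least restrictive.
COPILOT_DLP_RULES = [
    {"name": "block-high", "min_confidence": "high", "action": "block"},
    {"name": "warn-medium", "min_confidence": "medium", "action": "warn_with_override"},
    {"name": "log-low", "min_confidence": "low", "action": "allow_and_log"},
]

CONFIDENCE_RANK = {"low": 0, "medium": 1, "high": 2}

def rule_for(match_confidence):
    """Return the first (most restrictive) rule the match qualifies for."""
    for rule in COPILOT_DLP_RULES:
        if CONFIDENCE_RANK[match_confidence] >= CONFIDENCE_RANK[rule["min_confidence"]]:
            return rule
```

Ordering matters: a high-confidence match satisfies all three thresholds, so the block rule must be evaluated first.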

Set User Override Options

For medium-sensitivity rules, configure override options carefully:

  • Allow override with business justification: User can proceed after explaining why they need the content
  • Allow override without justification: Not recommended for Copilot — too easy to click through
  • Report false positive: Allow users to flag incorrect matches, which helps you tune SITs

Overrides generate audit log entries that your compliance team should review weekly.

Step 3: Configure Policy Tips for Copilot

Policy tips are the user-facing notifications that appear when DLP is triggered. For Copilot, these need to be clear and actionable because users might not understand why their AI response was blocked.

Customize Policy Tip Text

In your DLP rule configuration, customize the policy tip text:

For blocked content:

"Copilot's response was blocked because it contains sensitive information protected by organizational policy. If you need this information, access the source document directly through approved channels."

For warned content:

"Copilot's response may contain sensitive information. Please review before using this information externally. Click 'Override' if you have a business need for this content."

Generic policy tips like "This content violates organizational policy" are useless. Users need to know what was flagged and what to do instead.

Test Policy Tips

Before deploying to production, test your policies in simulation mode:

  1. Set the policy to "Test it out first" mode
  2. Have test users run prompts that should trigger each rule
  3. Review the DLP alerts and policy tip behavior
  4. Adjust confidence levels and SIT patterns based on results
  5. Run for at least two weeks before switching to enforcement
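A lightweight way to run step 2-4 systematically is to pair each test prompt with the verdict its rule should produce, then diff against what you actually observe in the DLP alerts. The prompts and verdicts below are hypothetical examples:

```python
# Test prompts mapped to the verdict each rule should produce in simulation.
EXPECTED_VERDICTS = [
    ("Summarize the customer payment-details document", "block"),
    ("What is the Project Phoenix valuation?", "warn"),
    ("List action items from yesterday's standup", "allow"),
]

def simulation_gaps(observed):
    """Return prompts whose observed verdict differs from the expected one.

    `observed` maps prompt -> verdict recorded from the DLP alerts.
    Prompts missing from `observed` are reported as gaps too.
    """
    return [prompt for prompt, expected in EXPECTED_VERDICTS
            if observed.get(prompt) != expected]
```

An empty gap list after two weeks of simulation is your signal that the policy is ready for enforcement mode.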

Step 4: DLP for Copilot in Specific Workloads

Copilot operates across multiple M365 workloads, and each has slightly different DLP behavior.

Copilot in Teams

DLP for Copilot in Teams monitors both:

  • Copilot responses in Teams chat
  • Copilot-generated meeting summaries and action items

Configure the Teams location in your DLP policy alongside the Copilot location. Teams DLP has been available longer and is more mature — it catches both human-generated and Copilot-generated content in chat messages.

Be aware that Copilot meeting summaries are particularly risky. A meeting where someone verbally mentions a client's financial details could result in a Copilot summary that includes those details in written form — now subject to DLP scanning.

Copilot in Word, Excel, and PowerPoint

Copilot in Office apps generates content within documents. DLP evaluates this content when the document is:

  • Saved to SharePoint or OneDrive (triggering DLP scanning)
  • Shared via email or Teams (triggering DLP on the sharing action)

Note that DLP doesn't evaluate Copilot's in-app suggestions in real-time. It only catches the content after it's been accepted and saved. This means there's a window where sensitive content exists in a document before DLP processes it.

Copilot in Outlook

Copilot email summarization and drafting are covered by Exchange DLP policies. If you already have DLP policies on Exchange Online, they'll evaluate Copilot-generated email drafts before sending.

However, email summarization (reading and summarizing received emails) happens within the Copilot context, not Exchange. You need the Copilot-specific DLP location to catch sensitive content in summaries.

Step 5: Monitor and Tune

DLP policies are never "set and forget." Copilot generates novel content patterns that your initial SITs might not catch, and overly aggressive policies create productivity friction that leads to workarounds.

Review DLP Incident Reports

Navigate to Microsoft Purview > Data loss prevention > Activity explorer to review DLP incidents:

  • Weekly: Review all blocked and warned incidents for false positives
  • Monthly: Analyze patterns — are certain SITs triggering too often? Not enough?
  • Quarterly: Full policy review with compliance and security teams

Key Metrics to Track

  • False positive rate: If more than 20% of DLP triggers are false positives, your SITs need tuning
  • Override rate: If users override warnings more than 50% of the time, either the policy is too aggressive or users aren't taking it seriously
  • Block rate: Track how often Copilot responses are fully blocked — a sudden spike indicates either a new data exposure pattern or a misconfigured policy
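These three metrics are straightforward to compute from an export of your incident records. The field names below are hypothetical — map them onto whatever your Activity explorer export actually contains:

```python
def dlp_metrics(incidents):
    """Compute false-positive, override, and block rates for a review period.

    Each incident record is assumed to carry an "action" ("block", "warn",
    or "allow"), plus boolean "false_positive" and "overridden" flags.
    """
    total = len(incidents)
    warned = [i for i in incidents if i["action"] == "warn"]
    return {
        "false_positive_rate": sum(i["false_positive"] for i in incidents) / total,
        "override_rate": (sum(i["overridden"] for i in warned) / len(warned)
                          if warned else 0.0),
        "block_rate": sum(i["action"] == "block" for i in incidents) / total,
    }
```

Note that override rate is computed against warned incidents only, since blocked and allowed interactions offer no override to measure.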

Adjust SIT Confidence Levels

Based on monitoring data, adjust confidence levels:

  • Too many false positives → increase required confidence level
  • Missing real sensitive content → decrease confidence level or add corroborating evidence patterns
  • Specific content types consistently missed → create new custom SITs

Step 6: Integrate with Broader Data Protection

DLP for Copilot doesn't exist in isolation. It should be part of your broader Microsoft Purview data protection strategy.

Combine with Sensitivity Labels

Sensitivity labels and DLP are complementary. Labels classify content at rest; DLP protects it in motion (including Copilot-generated motion). Configure DLP policies that reference sensitivity labels as conditions:

  • If Copilot accesses content labeled "Highly Confidential," apply stricter DLP evaluation to the response
  • If Copilot generates content that matches SITs associated with "Confidential" content, auto-apply a sensitivity label to the output
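The label-as-condition pattern amounts to lowering the trigger threshold when any source Copilot drew from carries a high-sensitivity label. A minimal sketch, with hypothetical label names:

```python
# Labels that should tighten DLP evaluation when present on source content.
STRICT_LABELS = {"Highly Confidential"}

def required_confidence(source_labels):
    """Return the SIT confidence needed to trigger, given source labels."""
    if STRICT_LABELS & set(source_labels):
        return "medium"  # stricter: medium-confidence matches now trigger
    return "high"        # default: only high-confidence matches trigger
```

The effect is that the same SIT match can pass when Copilot summarizes a "General" document but trigger a policy tip when the source was labeled "Highly Confidential".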

For sensitivity label configuration details, see our complete sensitivity labels guide.

Combine with Information Barriers

If your organization uses Microsoft Purview Information Barriers (common in financial services and legal), ensure they're configured to work with Copilot. Information Barriers prevent Copilot from accessing content across barrier boundaries, which reduces the scope of what DLP needs to evaluate. Our guide on Copilot governance frameworks covers how these controls work together.

Combine with Adaptive Protection

Microsoft Purview Adaptive Protection uses insider risk signals to dynamically adjust DLP policy enforcement. A user flagged as high-risk by Insider Risk Management automatically gets stricter DLP policies applied to their Copilot interactions.

This is particularly valuable for Copilot because a compromised account with Copilot access can exfiltrate data much faster than manual browsing. Adaptive Protection tightens controls automatically when risk indicators appear.

Common Pitfalls

Pitfall 1: Testing in production. Always use simulation mode first. A misconfigured DLP policy that blocks Copilot responses across your organization will generate a flood of helpdesk tickets and executive complaints.

Pitfall 2: Ignoring custom SITs. Built-in SITs cover regulated data (PII, PCI, HIPAA) but not your organization's proprietary information. If your crown jewels are trade secrets, algorithm details, or strategic plans, you need custom SITs.

Pitfall 3: Set and forget. Copilot's behavior evolves with model updates. Content patterns that your policies caught today might be expressed differently after the next model update. Schedule regular policy reviews.

Pitfall 4: Not communicating to users. Users who encounter DLP blocks without explanation will work around them — copy-pasting from source documents, using personal AI tools, or sharing content through unauthorized channels. Clear policy tips and user training prevent shadow workarounds.

Pitfall 5: Inconsistent coverage. If DLP covers Copilot in Teams but not Copilot in Word, users will shift their Copilot usage to the unmonitored workload. Apply policies consistently across all Copilot locations.

Take Action Now

DLP for Copilot isn't a future requirement — it's a current one. Every day Copilot runs without proper DLP policies is a day where sensitive data can be surfaced, summarized, and shared without controls.

Scan your Copilot DLP readiness → Our free assessment evaluates your current DLP configuration against Copilot-specific requirements and identifies gaps before they become incidents. Get your DLP baseline in minutes.