Why Law Firms Are Quietly Disabling Microsoft Copilot
There's a pattern emerging in BigLaw and mid-size firms that Microsoft's sales team doesn't want to talk about. Firms that enthusiastically piloted Microsoft 365 Copilot in early 2024 are quietly pulling licenses, restricting access, or — in several cases — disabling the tool entirely.
Not because AI doesn't work. Because it works too well at the wrong things.
When you give an AI assistant unrestricted access to a law firm's document management system, you don't get a productivity tool. You get an attorney-client privilege violation waiting to happen, a Chinese wall breach on autopilot, and a malpractice insurer's nightmare scenario — all for the low price of $30 per user per month.
I've spent twenty years watching enterprise software get deployed into regulated industries by vendors who don't understand the regulatory landscape. Copilot in legal is the most spectacular example I've seen since a major bank tried to put client data in a public Slack workspace in 2019.
The US House of Representatives Already Figured This Out
In March 2024, the US House of Representatives banned Microsoft Copilot for all congressional staff. The Office of the Chief Administrative Officer classified it as a risk to "users with access to House data," specifically citing the potential for data leakage to non-approved cloud services.
Let that sink in. The US Congress — not exactly known for technological sophistication — looked at Copilot and said "no." Their concern wasn't hypothetical. Catherine Szpindor's office determined that Copilot's data processing model couldn't guarantee that House data wouldn't leak outside approved Microsoft 365 tenants.
If Congress won't trust Copilot with legislative drafts, why would a law firm trust it with privileged client communications?
The answer, of course, is that many firms didn't fully evaluate the risk before deploying. They saw "AI-powered productivity" and heard "$30/user/month" and assumed Microsoft had sorted out the security model. Microsoft had not.
The SharePoint Permission Problem No One Wants to Talk About
Here's the dirty secret of every law firm's Microsoft 365 environment: SharePoint permissions are a disaster.
This isn't unique to law firms, but the consequences are uniquely catastrophic in legal. According to a 2023 Varonis report, the average organization has 40,000 unique permissions across SharePoint and OneDrive, and over 50% of sensitive files are accessible to overly broad groups. A Concentric AI analysis found that roughly 1 in 10 files with sensitive data in enterprise M365 environments had overly permissive sharing.
In a typical corporation, oversharing means a marketing associate can see the finance team's budget spreadsheet. Embarrassing, maybe a governance issue, but rarely existential.
In a law firm, oversharing means:
- A first-year associate working for Client A can see privileged documents belonging to Client B — who happens to be Client A's opposing party in active litigation
- Lateral hires from opposing counsel's firm can access documents from matters where their former firm represents the other side
- Practice group boundaries dissolve — the M&A team sees litigation hold documents, the employment group sees IP prosecution files, the tax team sees criminal defense materials
Microsoft Copilot doesn't create these permission problems. It weaponizes them.
Before Copilot, finding a misfiled document in the wrong SharePoint site required someone to actively navigate to the wrong library and stumble across it. The probability was low. The exposure window was narrow.
With Copilot, a simple prompt like "summarize all documents related to Acme Corporation" will surface every document the user has access to — including documents they technically have SharePoint permissions to view but were never supposed to see. Copilot doesn't know about ethical walls. It doesn't understand that access ≠ authorization in a legal context. It just follows the permission graph wherever it leads.
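The gap between access and authorization can be sketched in a few lines. Everything below is hypothetical — a toy model of the failure mode, not Copilot's actual retrieval logic — but it shows why a permission-graph search happily returns a document that an ethical wall, living in a separate system, was supposed to block:

```python
# Toy model: retrieval that honors SharePoint-style permissions but is
# blind to ethical walls. All names and data are hypothetical.

DOCUMENTS = [
    {"id": 1, "site": "PharmaIndustryResearch", "text": "General FDA approval trends"},
    {"id": 2, "site": "PharmaIndustryResearch", "text": "Acme Corp regulatory strategy memo"},
]

# Permission graph: user -> sites they can read
PERMISSIONS = {"associate_b": {"PharmaIndustryResearch"}}

# The firm's ethical wall lives in a *different* system the AI never consults
ETHICAL_WALLS = {"associate_b": {"blocked_terms": {"Acme Corp"}}}

def copilot_style_search(user: str, query: str) -> list[dict]:
    """Return every readable document matching the query; walls are invisible here."""
    readable = [d for d in DOCUMENTS if d["site"] in PERMISSIONS.get(user, set())]
    return [d for d in readable if query.lower() in d["text"].lower()]

hits = copilot_style_search("associate_b", "regulatory")
# The walled-off Client A memo comes back, because access is not authorization.
```

The fix is not to delete the `ETHICAL_WALLS` table — it is to make retrieval consult it, which no file-share permission model does on its own.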
A Concrete Scenario
Consider a mid-size firm with 200 attorneys. The firm represents both a pharmaceutical company (Client A) and a plaintiff in a product liability suit against that same company (Client B), screened by an ethical wall.
In the pre-Copilot world, the ethical wall was enforced by a combination of:
- A conflict check system that flagged the relationship
- Physical separation (different floors, different partners)
- IT-administered access controls on document management
- Training and professional responsibility obligations
The firm's iManage DMS properly walls off the matters. But when they migrated to Microsoft 365, someone created a SharePoint site for "Pharma Industry Research" that both practice groups can access. It contains general industry reports, but also a few memos that reference specific client strategies.
Pre-Copilot, those memos sat unread in a folder three levels deep. Post-Copilot, an associate on the Client B team asks Copilot: "What do we know about [pharma company]'s regulatory strategy?" Copilot helpfully surfaces the Client A memo. The ethical wall is breached. The firm now has a conflict that could result in disqualification from both matters.
This isn't a theoretical scenario. It's a Tuesday.
Attorney-Client Privilege in the Age of AI
Attorney-client privilege is the foundation of legal practice. It requires that communications between attorneys and clients remain confidential. Disclosure to a third party — even inadvertent disclosure — can waive the privilege entirely.
The question that keeps general counsels awake at night: Does sending privileged communications through Microsoft Copilot constitute disclosure to a third party?
Microsoft's position is that Copilot processes data within the Microsoft 365 compliance boundary and doesn't use customer data to train models. Their Data Protection Addendum and Product Terms make this contractually binding. But contract terms aren't the same as legal precedent, and no court has definitively ruled on whether AI processing of privileged material constitutes a waiver.
The ABA's Formal Opinion 477R (2017) already established that lawyers have an ethical obligation to make "reasonable efforts" to prevent unauthorized access to client information when using technology. That opinion was written about cloud storage and email. AI assistants that actively index, process, and surface privileged materials represent a qualitatively different risk.
Consider what Copilot actually does with a privileged email:
- Indexes it in Microsoft Graph, making it searchable
- Generates embeddings — mathematical representations of the content
- Surfaces it in responses to prompts that may come from anyone with sufficient permissions
- Potentially includes it in summaries alongside non-privileged material, creating a mixed document with unclear privilege status
Each of these steps introduces a vector for inadvertent disclosure. And in many jurisdictions, inadvertent disclosure analysis under Federal Rule of Evidence 502(b) looks at whether the holder took "reasonable steps to prevent disclosure." Deploying an AI tool that indexes privileged material alongside non-privileged material, without robust controls, is a hard sell for "reasonable steps."
Document Review Hallucinations: The Malpractice Multiplier
In June 2023, attorneys Steven Schwartz and Peter LoDuca of Levidow, Levidow & Oberman made international headlines when they submitted a legal brief containing six entirely fabricated case citations generated by ChatGPT. Judge P. Kevin Castel of the Southern District of New York sanctioned them and called the filing "an unprecedented circumstance" (Mata v. Avianca, No. 22-cv-1461).
That was ChatGPT — a general-purpose chatbot. Now imagine the same hallucination problem embedded in a tool that's integrated into every attorney's workflow, running inside Outlook, Word, Teams, and the firm's document management system.
Microsoft Copilot hallucinates. Every large language model hallucinates. Microsoft's own documentation acknowledges that "AI-generated content may be inaccurate" and recommends human review. But in the velocity-driven environment of modern legal practice, that disclaimer gets ignored approximately 100% of the time.
The specific risks in legal document review:
- Contract analysis: Copilot summarizes a 200-page merger agreement and misses — or invents — a material adverse change clause. The associate relies on the summary. The deal closes with an undiscovered liability.
- Due diligence: Copilot reviews thousands of documents in a data room and reports "no environmental liabilities found." In reality, it processed 80% of the documents and hallucinated a clean result for the remainder.
- Legal research: An attorney asks Copilot to find relevant case law. It returns a mix of real citations and fabricated ones. Unlike Westlaw or LexisNexis, Copilot doesn't distinguish between verified legal databases and its training data.
- Regulatory compliance: Copilot summarizes regulatory requirements and conflates rules from different jurisdictions, creating a compliance framework that's confidently wrong.
The malpractice exposure is staggering. Legal malpractice insurers are already adding AI-related questions to their applications. Some are requiring firms to disclose their AI usage policies as a condition of coverage. Firms that can't demonstrate adequate AI governance risk coverage exclusions or premium increases.
The 67% Problem
According to Gartner research, 67% of organizations cite security concerns as the primary barrier to generative AI adoption. In legal, that number should be higher — and the firms that are being honest with themselves know it.
A 2024 survey by the International Legal Technology Association found that while over 80% of law firms were exploring AI tools, fewer than 30% had implemented comprehensive AI governance frameworks. The gap between adoption interest and governance readiness is where disasters live.
The core issue isn't that Microsoft built a bad product. Copilot is technically impressive. The issue is that Microsoft built a product optimized for general enterprise productivity and is selling it into an industry with regulatory requirements that the product's architecture fundamentally cannot address without significant additional infrastructure.
What Microsoft doesn't tell you in the sales pitch:
- Copilot respects M365 permissions, not ethical walls. Your conflict management system and your SharePoint permissions are two different systems. Copilot only knows about one.
- Sensitivity labels are necessary but not sufficient. You can label documents, but labels require consistent application. One mislabeled document in a universe of millions defeats the entire system.
- Audit logging exists but isn't designed for legal compliance. Copilot interactions are logged in the Microsoft Purview audit log, but mapping those logs to specific ethical obligations requires custom tooling that doesn't exist out of the box.
- Data residency controls don't solve privilege issues. Keeping data in a specific geography doesn't prevent an associate in the New York office from accessing documents they shouldn't see from the Chicago office.
What Firms Actually Need
After watching half a dozen firms go through the Copilot deployment-and-retreat cycle, I can describe clearly what legal organizations actually need from AI:
1. Matter-Level Access Controls
AI tools in legal need to understand that access is scoped to specific matters, not to organizational hierarchy. An attorney who is authorized to work on Matter 12345 should have AI assistance for Matter 12345 and only Matter 12345. This requires integration with the firm's conflict management and matter management systems — not just SharePoint permissions.
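A minimal sketch of what matter-level scoping means in practice, assuming a hypothetical `MATTER_ASSIGNMENTS` lookup fed by the conflicts and matter-management systems. None of this reflects a real product API; the point is that the scope check happens before retrieval, against the system of record for assignments:

```python
# Sketch: every AI query is scoped to one matter, and the scope is checked
# against the matter-management system, not a file-share permission list.
# All names and data are hypothetical.

MATTER_ASSIGNMENTS = {"attorney_smith": {"M-12345", "M-12390"}}

DOCUMENT_INDEX = {
    "M-12345": ["engagement letter", "draft complaint"],
    "M-67890": ["opposing party strategy memo"],
}

def matter_scoped_query(user: str, matter_id: str, prompt: str) -> list[str]:
    """Refuse the query outright unless the user is assigned to the matter."""
    if matter_id not in MATTER_ASSIGNMENTS.get(user, set()):
        raise PermissionError(f"{user} is not assigned to {matter_id}")
    # Retrieval is limited to the single authorized matter by construction.
    return DOCUMENT_INDEX.get(matter_id, [])

docs = matter_scoped_query("attorney_smith", "M-12345", "summarize filings")
```

The design choice worth noting: the unauthorized path raises rather than returning an empty list, so a blocked query is an auditable event, not a silent miss.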
2. Privilege-Aware Processing
AI systems handling legal documents need to understand the concept of privilege. At minimum, this means:
- Never mixing privileged and non-privileged content in the same response
- Flagging potentially privileged material before surfacing it
- Maintaining privilege logs that track AI interactions with privileged documents
- Allowing privilege designations to propagate through AI-generated derivatives
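The first two requirements above can be sketched as a response assembler that segregates privileged material instead of blending it into a summary. The classification here is a keyword stand-in for illustration only; a real system would use reviewed privilege designations from the DMS, and all names are hypothetical:

```python
# Sketch of a privilege-aware response assembler: privileged material is
# withheld, flagged, and logged rather than mixed into generated text.

PRIVILEGE_MARKERS = ("attorney-client", "work product", "privileged")

def is_privileged(doc: dict) -> bool:
    # Stand-in classifier: trust an explicit designation, fall back to markers.
    text = doc["text"].lower()
    return doc.get("designation") == "privileged" or any(m in text for m in PRIVILEGE_MARKERS)

def assemble_response(docs: list[dict], privilege_log: list[dict]) -> dict:
    privileged = [d for d in docs if is_privileged(d)]
    clean = [d for d in docs if not is_privileged(d)]
    for d in privileged:
        privilege_log.append({"doc_id": d["id"], "action": "withheld_from_response"})
    return {
        "summary_sources": [d["id"] for d in clean],          # only non-privileged content
        "flagged_for_review": [d["id"] for d in privileged],  # surfaced as flags, not text
    }

log: list[dict] = []
result = assemble_response(
    [{"id": 1, "text": "industry report"},
     {"id": 2, "text": "attorney-client memo on settlement", "designation": "privileged"}],
    log,
)
```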
3. Ethical Wall Enforcement
Chinese walls in law firms aren't suggestions — they're ethical obligations enforceable by bar disciplinary authorities. AI tools must integrate with conflict management systems and enforce ethical walls at the query level, not just the document level. If an attorney is walled off from a matter, the AI shouldn't acknowledge the matter's existence, let alone surface its documents.
4. Verifiable Outputs
Legal work product needs to be verifiable. Every assertion needs a citation. Every summary needs to be traceable to source documents. Every piece of AI-generated analysis needs to be auditable. This is fundamentally incompatible with the way current LLMs generate text — they produce fluent language, not footnoted research.
5. Granular Audit Trails
Bar associations, courts, and clients are going to start asking how AI was used in their matters. Firms need audit trails that show:
- What prompts were submitted
- What documents were accessed
- What outputs were generated
- Who reviewed and approved AI-generated work product
- When and how AI outputs were modified
6. Professional Responsibility Integration
AI deployment in legal can't be a pure IT decision. It requires integration with the firm's professional responsibility infrastructure — ethics counsel review, CLE training on AI usage, practice group-specific policies, and ongoing monitoring for ethical compliance.
What a Proper AI Deployment for Legal Looks Like
The firms that are getting AI right — and they exist, though they're quieter about it than the firms generating press releases — are building controlled environments rather than deploying general-purpose tools.
Architecture matters. The successful deployments I've seen share common characteristics:
- Private model instances that don't send data to shared cloud endpoints. Azure OpenAI Service with data processing agreements, not the consumer API.
- Document pipeline controls that pre-filter content before it reaches the LLM. Privileged documents go through a privilege-classification layer before they're indexed. Ethical wall violations are blocked at the query layer, not the permission layer.
- Integration with existing legal tech — iManage, NetDocuments, Relativity, Aderant — rather than replacing it. The DMS is the system of record for access controls. The AI tool reads from the DMS permission model, not from SharePoint.
- Human-in-the-loop requirements that are enforced architecturally, not just by policy. AI-generated research must be verified against primary sources before it can be included in work product. The system blocks copy-paste from AI outputs into court filings without an attestation step.
- Comprehensive monitoring that tracks every AI interaction, flags anomalies, and generates compliance reports. Not Microsoft Purview with default settings — purpose-built observability for legal AI usage.
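What "enforced architecturally, not just by policy" looks like can be sketched as a release gate: AI output simply cannot enter work product until an attorney attests to having verified it. The class and method names are hypothetical, and a real gate would sit in the document pipeline rather than application code:

```python
# Sketch of an architecturally enforced human-in-the-loop step: unattested
# AI output cannot be released into work product. Hypothetical names throughout.

class AttestationRequired(Exception):
    """Raised when AI output is used without attorney verification."""

class ReviewGate:
    def __init__(self) -> None:
        self._attested: set[str] = set()

    def attest(self, output_id: str, attorney: str) -> None:
        # In practice this would record who attested, when, and to what content hash.
        self._attested.add(output_id)

    def release_to_work_product(self, output_id: str, text: str) -> str:
        if output_id not in self._attested:
            raise AttestationRequired(f"{output_id} has not been verified by an attorney")
        return text

gate = ReviewGate()
gate.attest("out-42", "associate_lee")
filing_text = gate.release_to_work_product("out-42", "Verified summary of exhibits")
```

A policy says "double-check the output"; a gate like this makes skipping the check a hard failure instead of a habit.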
This is harder than deploying Copilot. It's more expensive than $30/user/month. It requires actual engineering work, not just license procurement.
But it's the only approach that respects the regulatory reality of legal practice. Law firms don't get to say "oops" when privilege is waived. They don't get to blame Microsoft when an ethical wall is breached. They don't get to tell a sanctioning court that the AI hallucinated the case citations.
The firms that understand this are building carefully. The firms that don't are going to learn the hard way.
And if you're a managing partner reading this and thinking "we'll just write a policy that says attorneys have to double-check Copilot's output" — I have a bridge to sell you. Policies don't survive contact with billable hour pressure. Architecture does.
The legal industry needs AI tools built for legal workflows, not enterprise productivity tools with legal use cases bolted on as an afterthought. If your firm is evaluating AI deployment, start with your ethical obligations and work backward to the technology — not the other way around.