The Microsoft Graph API Permission Model: What Every IT Leader Must Understand Before Deploying AI
Here's a scenario I see every month. A company deploys Copilot for Microsoft 365. Within the first week, someone in marketing asks Copilot to summarize recent project updates and gets back confidential HR documents. Someone in engineering asks for help with a proposal and gets a draft that includes salary data from a spreadsheet they shouldn't have access to.
The IT team panics. "Copilot is leaking data." They consider rolling back the entire deployment.
But Copilot didn't leak anything. It accessed exactly the content that user already had permission to see. The user just never knew they had access to those files — because they'd never navigated to that SharePoint site, never clicked into that folder, never had a reason to look. The permissions were wrong long before AI arrived.
This is the permission debt problem, and it lives inside the Microsoft Graph API permission model. If you're deploying any AI on Microsoft 365, this is the most important technical concept you need to understand. Not because it's complicated — it's not, once someone explains it properly — but because getting it wrong torpedoes your entire AI strategy.
What Is Microsoft Graph, and Why Does It Matter?
Microsoft Graph is the unified API that sits in front of virtually everything in Microsoft 365. Exchange email, SharePoint files, Teams messages, OneDrive documents, Planner tasks, Azure AD user profiles — Graph is the single doorway to all of it.
When Copilot answers a question, it's querying Graph. When a custom AI agent retrieves documents, it's calling Graph. When Power Automate triggers a workflow, it's going through Graph. Every AI tool in the M365 ecosystem ultimately asks Graph for data, and Graph decides what to return based on permissions.
This makes the Graph permission model the de facto access control layer for all M365 AI. Get it right, and AI becomes a productivity multiplier. Get it wrong, and you've built a system that efficiently surfaces content people shouldn't see.
The Two Permission Types: Delegated vs. Application
This is where most IT leaders' eyes glaze over, and it's exactly where they can't afford to lose focus.
Delegated Permissions
Delegated permissions work on behalf of a signed-in user. The app can only access what the user can access. Think of it as the app borrowing the user's badge to walk through doors.
How it works in practice:
- User signs into an app (say, Copilot)
- The app requests delegated permissions (e.g., Sites.Read.All)
- An admin (or the user, depending on consent settings) approves
- The app can now access SharePoint sites — but only the ones that specific user has access to
The critical point: Sites.Read.All as a delegated permission does NOT mean the app can read all sites in your tenant. It means the app can read any site that the current user can read. The .All refers to the breadth of the API the app is allowed to call, not the breadth of content it can access. This trips up everyone.
When a user asks Copilot "What are the latest project updates?", Copilot uses delegated permissions to search across the user's accessible content. It returns results based on what that user is already authorized to see. Copilot doesn't escalate access. It exercises existing access more efficiently.
This is how Copilot works. It inherits the user's permission context entirely. There is no separate "Copilot permission" layer. If a user can access a SharePoint site through the browser, Copilot can surface content from that site. If they can't, Copilot won't.
Application Permissions
Application permissions are the ones that should make you nervous — or at least attentive. These don't act on behalf of a user. They act as the application itself, with tenant-wide access.
How it works:
- An app registration is created in Azure AD
- Application permissions are assigned (e.g., Sites.Read.All)
- A tenant admin grants consent
- The app can now access ALL SharePoint sites, regardless of which user (if any) is involved
Sites.Read.All as an application permission literally means all sites. Every SharePoint site in your tenant. This is the version that security teams rightfully scrutinize.
When you need application permissions:
- Background processing jobs that run without a user context
- Automated workflows that need to access content across the organization
- Service accounts for integration platforms
- Custom AI agents that process documents organization-wide
The problem: Most custom AI agent deployments start with application permissions because they're easier to implement. The developer doesn't have to deal with user authentication flows. The agent "just works." But it works with tenant-wide access, which means a misconfigured agent can read every document in your organization.
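The scope difference between the two permission types fits in a few lines of Python. This is an illustrative model, not the Graph SDK; the site names, group names, and ACLs are hypothetical, and in reality the trimming happens inside SharePoint, not in your client code.

```python
# Illustrative model of delegated vs. application permission scope.
# Sites, groups, and ACLs are hypothetical sample data.

TENANT_SITES = {
    "HR-Confidential": {"hr-team"},
    "Engineering": {"eng-team", "alice"},
    "Marketing": {"mkt-team", "alice"},
}

def readable_sites(permission_type: str, user_groups: frozenset = frozenset()) -> set:
    """Return the sites an app holding Sites.Read.All can actually read."""
    if permission_type == "application":
        # Application permission: the app acts as itself, so every site.
        return set(TENANT_SITES)
    if permission_type == "delegated":
        # Delegated permission: results are trimmed to the signed-in user's access.
        return {name for name, acl in TENANT_SITES.items() if acl & user_groups}
    raise ValueError(permission_type)

# Alice through a delegated app: only her sites.
print(readable_sites("delegated", frozenset({"alice"})))
# A background agent with application permissions: everything, HR included.
print(readable_sites("application"))
```

Same permission string, radically different blast radius. That asymmetry is the whole reason the delegated/application distinction deserves a line item in your security review.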
The Permission Hierarchy Most People Don't Understand
SharePoint (which stores the majority of M365 content) has a layered permission model:
Tenant
└── Site Collection
└── Site
└── Document Library
└── Folder
└── Item (file)
Permissions can be set at any level and can be inherited or broken. Here's where it gets messy:
Default behavior: When you create a new SharePoint site, it inherits a set of default permissions. Anyone in the "Everyone except external users" group often gets read access. This is a Microsoft default that many organizations never change.
Broken inheritance: You can break permission inheritance at any level — set unique permissions on a specific folder or file. This is powerful and also the source of most permission chaos. Over time, you end up with sites where the top level has one set of permissions, three folders have different permissions, and seventeen individual files have been shared with specific people via sharing links.
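Effective access under broken inheritance resolves to the nearest level that defines its own permissions. Here's a minimal sketch of that resolution walk; the site structure and principals are hypothetical, and real SharePoint ACLs carry far more detail than a set of names.

```python
# Sketch of SharePoint-style permission inheritance. Each node either
# inherits from its parent or carries a unique ACL (broken inheritance).
# Structure and principal names are hypothetical.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    parent: "Node | None" = None
    unique_acl: "set | None" = None   # None means: inherit from parent

    def effective_acl(self) -> set:
        node = self
        while node.unique_acl is None:
            node = node.parent        # walk up until some level defines an ACL
        return node.unique_acl

site = Node("ProjectSite", unique_acl={"Everyone except external users"})
library = Node("Documents", parent=site)                          # inherits
folder = Node("Budget", parent=library, unique_acl={"finance"})   # broken here
doc = Node("fy25.xlsx", parent=folder)                            # inherits the break

print(site.effective_acl())   # the broad default
print(doc.effective_acl())    # the folder's break wins
```

Multiply this walk across hundreds of sites and thousands of breaks, and you can see why nobody can answer "who can read this file?" from the site-level view alone.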
Sharing links: The modern sharing experience in SharePoint and OneDrive creates sharing links that can be:
- Anyone links (anonymous access — anyone with the link)
- People in your organization (any authenticated user in your tenant)
- Specific people (named individuals)
Every one of these creates an access path that most permission audits miss. An executive shares a sensitive budget file with "people in your organization" for a quick review, forgets to revoke the link, and now all 5,000 employees technically have read access to next year's budget projections.
AI doesn't care that nobody remembers the link exists. It indexes everything the user can access, including content reachable through forgotten sharing links.
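Sharing links are access paths that exist alongside the ACL, which is why audits that only read ACLs miss them. A minimal sketch of the combined check, with hypothetical file names, links, and users:

```python
# Sketch of how sharing links grant access outside the ACL.
# Files, links, and users are hypothetical sample data.

def can_read(user: str, is_authenticated: bool, file_acl: set, links: list) -> bool:
    if user in file_acl:
        return True                                   # direct ACL grant
    for link in links:
        if link["type"] == "anyone":
            return True                               # anonymous link: no auth needed
        if link["type"] == "organization" and is_authenticated:
            return True                               # any user in the tenant
        if link["type"] == "specific" and user in link.get("people", set()):
            return True                               # named individuals only
    return False

budget_acl = {"cfo", "finance-director"}
# A forgotten "people in your organization" link from a quick review:
budget_links = [{"type": "organization"}]

print(can_read("marketing-coordinator", True, budget_acl, budget_links))
```

That `True` is the budget-projections scenario above: the ACL is tight, but the forgotten link quietly opens the file to the whole tenant.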
Why "Sites.Read.All" Sounds Scary But Is Necessary
Let me address this directly, because it comes up in every security review.
When you deploy Copilot or build a custom agent that needs to search across a user's SharePoint content, the application needs the Sites.Read.All delegated permission. There is no Sites.Read.Some permission. There is no Sites.Read.OnlyTheOnesIApprove permission.
Microsoft designed the permission model this way because the access filtering happens at the data layer, not the permission layer. The delegated permission says "this app can make requests to the SharePoint API." The SharePoint access control layer says "this user can see these specific sites and files."
It's a two-gate system:
- Gate 1 (Graph permission): Does this app have the right to call this API? (Sites.Read.All = yes, it can call the Sites API)
- Gate 2 (SharePoint ACL): Does this user have access to this specific content? (Determined by site permissions, sharing links, group membership, etc.)
Both gates must open for data to flow. The Graph permission without user access returns nothing. User access without the Graph permission means the app can't even make the call.
So when your security team sees Sites.Read.All in the consent prompt and raises a flag — the correct response isn't to block the permission. It's to ensure Gate 2 is properly configured. Which means your SharePoint permissions need to be right.
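The two-gate flow can be sketched in a few lines. This is a toy model, not the real API: the scope string matches Graph's, but the site names, user access sets, and return values are hypothetical.

```python
# The two-gate check in miniature. Gate 1 is the Graph permission held
# by the app; Gate 2 is the SharePoint ACL on the content. Both must
# open for data to flow. Site names and access sets are hypothetical.

def graph_read(app_scopes: set, user_sites: set, site: str) -> str:
    if "Sites.Read.All" not in app_scopes:
        return "403: app cannot call the Sites API"   # Gate 1 closed
    if site not in user_sites:
        return "no results"                           # Gate 2 closed: ACL trims it
    return f"contents of {site}"                      # both gates open

scopes = {"Sites.Read.All"}
alice_sites = {"Engineering", "Marketing"}

print(graph_read(set(), alice_sites, "Engineering"))       # blocked at Gate 1
print(graph_read(scopes, alice_sites, "HR-Confidential"))  # empty at Gate 2
print(graph_read(scopes, alice_sites, "Engineering"))      # data flows
```

Note the middle case: the broad-sounding scope plus a correct ACL yields nothing at all. That's the behavior to demonstrate to a skeptical security reviewer.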
The Permission Debt Problem
Here's the uncomfortable reality I find in nearly every M365 tenant I assess:
Permission debt is universal. Over years of organic SharePoint usage, permissions accumulate like technical debt in code. Nobody cleans them up because nobody sees the impact. Files get shared, links get created, groups get added, employees leave but their sharing links persist, and the effective permission state drifts further and further from what anyone intended.
The numbers are stark. In a typical 1,000-user M365 tenant, I regularly find:
- 40–60% of SharePoint sites have overly broad permissions (Everyone or Everyone except external users)
- 15–25% of shared content has anonymous sharing links that are still active
- 30%+ of files shared with specific individuals are accessible by people who've changed roles or left the organization
- External sharing is enabled on sites where it shouldn't be, sometimes with files shared to personal email addresses
None of this mattered before AI. Or rather, it mattered but the risk was theoretical. The probability that an employee would stumble across a misfiled HR document in a SharePoint site they didn't know they had access to was low. They'd have to navigate to the site, find the library, open the folder, and recognize the file.
AI collapses that probability to near certainty. AI doesn't browse. It searches everything, retrieves everything, and presents everything — with zero navigation friction. Permission debt becomes permission exposure the moment you deploy an AI agent with search capabilities.
Copilot's Permission Model: What Actually Happens
When a user interacts with Copilot in Microsoft 365, here's the actual flow:
- User asks a question ("Summarize the Q4 board deck")
- Copilot authenticates as the user via delegated permissions
- Microsoft Search indexes are queried — these indexes respect SharePoint permissions
- Results are filtered by the user's effective access
- Copilot synthesizes an answer from the returned content
- The answer inherits the sensitivity of the source content (if sensitivity labels are applied)
Key implications:
- Copilot cannot access content the user can't access. This is architecturally enforced, not policy-based.
- Copilot CAN access content the user forgot they could access. This is the core risk.
- Sensitivity labels propagate. If source content is labeled "Confidential," Copilot's response inherits that label.
- Search index latency matters. Recently permissioned or de-permissioned content may not be immediately reflected in Copilot responses.
Practical Steps to Audit and Fix Your Permissions
If you're reading this and feeling a knot in your stomach, good. That's the appropriate reaction. Here's what to do about it.
Step 1: Get Visibility (Week 1)
You can't fix what you can't see. Start with an audit.
Microsoft Purview provides some built-in reporting on sharing and permissions, but it's limited and often overwhelming. The SharePoint Admin Center shows site-level permissions but not the broken inheritance or sharing link chaos beneath.
What you actually need:
- Site-by-site permission mapping (who has access to what, and how)
- Sharing link inventory (every active link, its type, and when it was created)
- External access report (what's shared outside your tenant)
- Oversharing detection (content accessible to "Everyone" that shouldn't be)
This is exactly why we built our M365 Permission Scanner. Manual auditing of a 500-site SharePoint environment would take a dedicated admin weeks. Automated scanning takes hours and catches things manual audits miss.
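To make the four audit views concrete, here's a minimal sketch of the queries over a mock tenant snapshot. In production the snapshot would come from Graph and the SharePoint admin APIs; every site, link, and share below is hypothetical sample data, and the real inventory is orders of magnitude larger.

```python
# Sketch of the four audit views over a mock tenant snapshot.
# All sites, links, and shares are hypothetical sample data.

sites = [
    {"name": "HR", "acl": {"hr-team"}, "sensitive": True},
    {"name": "Intranet", "acl": {"Everyone except external users"}, "sensitive": False},
    {"name": "Finance", "acl": {"Everyone except external users"}, "sensitive": True},
]
links = [
    {"file": "fy25-budget.xlsx", "type": "anyone", "created": "2023-01-10"},
    {"file": "handbook.pdf", "type": "organization", "created": "2024-06-02"},
]
external_shares = [{"file": "contract.docx", "shared_with": "partner@example.com"}]

# 1. Site-by-site permission map
permission_map = {s["name"]: sorted(s["acl"]) for s in sites}
# 2. Sharing link inventory, anonymous links first
anonymous_links = [l["file"] for l in links if l["type"] == "anyone"]
# 3. External access report
external_files = [x["file"] for x in external_shares]
# 4. Oversharing detection: sensitive sites open to the broad default group
oversharing = [s["name"] for s in sites
               if s["sensitive"] and "Everyone except external users" in s["acl"]]

print(permission_map)
print(anonymous_links, external_files, oversharing)
```

The logic is trivial; the hard part is collecting an accurate snapshot across hundreds of sites and their broken-inheritance sublevels, which is what makes automation worth it.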
Step 2: Fix the Critical Issues (Weeks 2–3)
Prioritize based on risk:
- Revoke anonymous sharing links on sensitive content. These are the highest risk — anyone with the URL has access, no authentication required.
- Remove "Everyone except external users" from sites containing sensitive content. Replace with explicit security groups.
- Review external sharing. Identify content shared with external users and validate each share is still needed.
- Clean up orphaned permissions. Remove access for departed employees, former contractors, and dissolved teams.
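The four priorities above can be expressed as an ordered remediation pass. This sketch only records the planned actions over hypothetical inventory data; actually revoking links and permissions would go through the admin APIs, and the numeric prefixes are just a device to keep the plan sorted by risk.

```python
# Sketch of a risk-prioritized remediation plan (Step 2's ordering).
# Inventory data is hypothetical; actions are recorded, not executed.

def remediation_plan(links: list, sites: list, departed_users: set) -> list:
    plan = []
    for l in links:                                   # 1. anonymous links on sensitive content
        if l["type"] == "anyone" and l["sensitive"]:
            plan.append(("1-revoke-anonymous-link", l["file"]))
    for s in sites:                                   # 2. broad default group on sensitive sites
        if s["sensitive"] and "Everyone except external users" in s["acl"]:
            plan.append(("2-replace-broad-group", s["name"]))
    for l in links:                                   # 3. external shares to validate
        if l.get("external"):
            plan.append(("3-review-external-share", l["file"]))
    for s in sites:                                   # 4. orphaned access
        for user in s["acl"] & departed_users:
            plan.append(("4-remove-orphaned-access", f"{s['name']}:{user}"))
    return sorted(plan)                               # numeric prefix = risk order

links = [{"file": "budget.xlsx", "type": "anyone", "sensitive": True},
         {"file": "flyer.pdf", "type": "anyone", "sensitive": False, "external": True}]
sites = [{"name": "Finance", "acl": {"Everyone except external users", "bob"},
          "sensitive": True}]
print(remediation_plan(links, sites, departed_users={"bob"}))
```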
Step 3: Establish Governance (Weeks 3–4)
Fixing the current state is pointless if the same problems reaccumulate.
- Set default sharing link type to "Specific people" (not "People in your organization")
- Disable anonymous sharing links unless there's a specific business need
- Implement site classification — label sites by sensitivity level and apply appropriate permission templates
- Enable access reviews — Azure AD access reviews can periodically prompt site owners to validate who has access
- Apply sensitivity labels to high-value content — these travel with the content and enforce protection regardless of where it's stored or shared
Step 4: Test Before You Deploy AI (Week 4)
Before rolling out Copilot or any AI agent:
- Pick 5 users from different departments
- Log in as each user (or use the Microsoft Graph Explorer with delegated context)
- Search for content they shouldn't see — HR documents, executive strategy decks, salary data, legal files
- If they can find it, so can Copilot
This is a simple but brutally effective test. If your CFO's compensation analysis shows up in a search by a marketing coordinator, you have a permissions problem that AI will amplify.
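The smoke test above can be automated. In this sketch the search function is a hypothetical stand-in for a delegated Microsoft Search query (which already trims by ACL), and the users, files, and sensitive terms are sample data; the point is the shape of the test, not the search itself.

```python
# Sketch of the Step 4 pre-deployment smoke test: run a search as each
# pilot user and flag sensitive hits. Users, files, and terms are
# hypothetical; search_as stands in for a delegated search query.

SENSITIVE_TERMS = ["salary", "compensation", "board", "legal"]

def search_as(user: str, index: list) -> list:
    """Stand-in for a delegated search: returns only files the user can read."""
    return [f for f in index if user in f["acl"]]

index = [
    {"name": "salary-bands.xlsx", "acl": {"hr-lead", "marketing-coordinator"}},  # misfiled!
    {"name": "campaign-plan.docx", "acl": {"marketing-coordinator"}},
]

for user in ["marketing-coordinator", "eng-intern"]:
    hits = [f["name"] for f in search_as(user, index)
            if any(term in f["name"] for term in SENSITIVE_TERMS)]
    print(user, "->", hits or "clean")
```

Any non-clean line is a permission fix to make before Copilot goes live, because Copilot will surface exactly what this search surfaces.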
Step 5: Monitor Continuously
Permissions drift. New sites get created with default (overly broad) permissions. New sharing links get created daily. Employees change roles. This isn't a one-time fix — it's an ongoing practice.
Set up recurring scans (monthly at minimum) to detect permission drift. Alert on new anonymous sharing links, new external shares, and new "Everyone" permissions on sensitive sites.
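Drift detection reduces to diffing snapshots. A minimal sketch, with hypothetical snapshot contents: each snapshot is a set of (finding-type, target) pairs produced by the recurring scan, and anything new since the last scan raises an alert.

```python
# Sketch of a monthly drift check: diff two permission snapshots and
# alert on new high-risk findings. Snapshot contents are hypothetical.

ALERT_KINDS = {"anonymous-link", "external-share", "everyone-grant"}

def drift_alerts(previous: set, current: set) -> list:
    alerts = []
    for kind, target in sorted(current - previous):   # new findings only
        if kind in ALERT_KINDS:
            alerts.append(f"ALERT new {kind}: {target}")
    return alerts

january = {("everyone-grant", "Intranet")}
february = {("everyone-grant", "Intranet"),            # pre-existing, no alert
            ("anonymous-link", "roadmap.pptx"),        # new: alert
            ("external-share", "contract.docx")}       # new: alert

for line in drift_alerts(january, february):
    print(line)
```

Keeping snapshots also gives you a history: when an auditor asks when a file became externally shared, the answer is a diff away.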
The Bigger Picture: Why This Is Actually Good News
I know this article reads like a horror story about permissions. But here's the reframe: AI deployment is the forcing function for data governance that organizations have needed for years.
Every CISO has been saying "we need to clean up SharePoint permissions" for a decade. Nobody listened because the risk was abstract. Now it's concrete: deploy AI on a messy environment, and confidential data surfaces in the first week. That's a boardroom conversation.
The organizations that invest in fixing their permission model before deploying AI don't just get a safer AI deployment. They get:
- Reduced compliance risk across the board (not just AI)
- Cleaner data architecture that benefits every employee, every day
- Faster incident response when you know who has access to what
- Audit readiness for SOC2, ISO 27001, GDPR, and every other framework that cares about access control
AI isn't creating the permission problem. It's illuminating it. And the fix benefits everything, not just AI.
The Technical Foundation for Everything Else
If you've made it this far, you understand why we built our M365 environment scanner before building anything else. Permissions are the foundation. Every AI agent, every Copilot deployment, every custom automation — they all inherit the permission state of your M365 tenant.
Get the permissions right, and AI deployment becomes dramatically simpler, safer, and more successful. Skip this step, and every other investment — the licenses, the agents, the training — underperforms or actively creates risk.
This isn't the exciting part of AI deployment. It's the part that determines whether the exciting parts work.
E2E Agentic Bridge provides automated M365 permission scanning and remediation to prepare your environment for AI deployment. Our scanner identifies oversharing, permission debt, and governance gaps before they become AI risk. Start your assessment today.