
88% Already Hit. Permissions Are the Root Cause.

7 min read · Atypical Tech

Gravitee's State of AI Agent Security research dropped this week with a number that should end every remaining argument about whether AI agent security is a real problem: 88% of organizations experienced confirmed or suspected AI agent security or privacy incidents in the past year.

Eighty-eight percent. Not "could be vulnerable." Not "theoretically at risk." Already hit.

And the root cause wasn't prompt injection. It wasn't model hallucination. It wasn't adversarial inputs or jailbreaks. It was permissions. Agents inheriting admin-level access. Agents holding overly permissive API credentials. Agents reaching datasets and systems far beyond their intended scope — at machine speed.

The most dangerous thing about your AI agent isn't what it says. It's what it can reach.


The permission model is broken

When you give a human employee access to a system, they log in, navigate to what they need, and do their work. They might have broad permissions on paper, but in practice, they use a fraction of their access. They're slow. They're contextual. They self-limit.

AI agents don't self-limit. An agent with read access to your database will read every table. An agent with API credentials will call every endpoint. An agent with filesystem access will traverse every directory. Not because it's malicious — because that's what agents do. They operate at the full extent of their permissions, at machine speed, without the social friction that constrains human behavior.

This is why traditional IAM assumptions fail for non-human actors. Human IAM works because humans are slow and contextual. Agent IAM fails because agents are fast and literal.

EY's 2026 AI Sentiment Report — 18,000 respondents across 23 markets — found that 16% of organizations are already running autonomous AI systems in production. Not sandboxed. Not experimental. Autonomous agents making purchases, managing banking, processing transactions. The adoption is real. The permission models governing that adoption are not.

We deployed agents at production speed. We left permissions at prototype scope.


Exhibit A: The frameworks that expose your secrets

Security researchers disclosed three vulnerabilities in LangChain and LangGraph this week — frameworks that underpin millions of AI agent deployments. The worst, CVE-2025-67644 (CVSS 7.3), is an SQL injection flaw in LangGraph's SQLite checkpoint implementation.

Through these flaws, attackers can extract filesystem data, environment secrets, Docker configurations, and conversation histories from running AI applications. The common thread: unauthenticated endpoints with access to everything the agent can see.

The vulnerability isn't in the model. It's in what the framework can reach. LangGraph's checkpoint system has access to the full application context because it needs that access to do its job — save and restore agent state. The problem is that nothing scopes that access. No least-privilege boundary separates the checkpoint system from the secrets sitting in the same environment.

One SQL injection, and an attacker has your API keys, database credentials, and every conversation your agent has ever had.
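The structural problem can be shown in a few lines. This is an illustrative sketch, not LangGraph's actual code: the secret names and the `make_checkpointer` helper are hypothetical, but the contrast — a component that sees the whole process environment versus one handed only the values it needs — is the point.

```python
import os

# Anti-pattern (illustrative): the checkpoint layer runs in the same process
# environment as everything else, so any injection foothold in it can read
# every secret. These variable names are made up for the example.
os.environ["DB_PASSWORD"] = "s3cret-demo"
os.environ["API_KEY"] = "sk-demo"

def compromised_component() -> dict:
    # A foothold in one subsystem sees the entire shared environment.
    return {k: os.environ[k] for k in ("DB_PASSWORD", "API_KEY")}

# Mitigation sketch: pass each component only what it needs, explicitly.
def make_checkpointer(db_path: str):
    # Receives a single path, not the environment; there is nothing else to leak.
    def save(state: dict) -> str:
        return f"saved {len(state)} keys to {db_path}"
    return save
```

Nothing about the second version makes the SQL injection harder to find; it just means the injection lands in a component that holds one file path instead of every credential in the process.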


Exhibit B: When security tools have root

Security researcher Richard Fan found five vulnerabilities in AWS Security Agent — an autonomous AI pentesting tool designed to find security flaws.

The irony writes itself. A tool built to find vulnerabilities contained five of its own — including DNS confusion bugs that trick the agent into pentesting unauthorized domains, command injection via debug messages enabling reverse shell access with root privileges, container escape through mounted Docker sockets, and access to host EC2 IAM credentials.

AWS Security Agent has broad system access because it needs broad system access to do penetration testing. The design requirement and the security vulnerability are the same thing: overly broad permissions granted to an autonomous agent.

The agent needed root to do its job. The attacker needed root to do theirs. Same permission, different intent.

This isn't an AWS-specific problem. Every autonomous security tool, every AI-powered scanner, every agent-driven workflow that "needs" elevated access creates the same pattern. Broad permissions designed for the agent's legitimate function become the attacker's lateral movement path.


Exhibit C: Zero-click token theft

Security researcher Oren Yomtov disclosed a zero-click vulnerability in Anthropic's Claude Chrome Extension that chained an overly permissive origin allowlist with DOM-based XSS. The result: attackers could inject malicious prompts via hidden iframes, steal access tokens, and exfiltrate conversation history — without any user interaction.

The extension had permissions to read page content, access Claude's API tokens, and store conversation data. It needed those permissions to function. But the combination of broad permissions and a single XSS flaw turned a productivity tool into a credential harvesting platform.

No prompt injection. No model exploitation. Just a browser extension with too much access and one exploitable flaw.


The pattern

Every major AI agent security incident follows the same structure:

  1. An agent is granted broad permissions because it "needs" them to function
  2. A vulnerability — any vulnerability — provides an entry point
  3. The attacker inherits the agent's full permission scope
  4. The blast radius is determined not by the vulnerability's severity, but by the agent's access

The vulnerability is the door. Permissions are the blast radius. We keep building stronger doors while handing out keys to the entire building.

You can't firewall your way out of a permissions problem. The agent is already inside the perimeter.

And the scale is accelerating. The Model Context Protocol has hit 97 million installs, standardizing how agents connect to tools and data sources. Standardization is good for interoperability. It's also good for attackers — a protocol-level permission flaw propagates across every MCP-compatible deployment simultaneously.


What actually works

The fix isn't better guardrails or smarter filters. It's a fundamentally different permission model for non-human actors.

Allowlist, not denylist. Don't start with everything and remove what's dangerous. Start with nothing and add only what's required. Every credential, every API scope, every filesystem path — explicitly granted, not inherited.
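A deny-by-default permission set is small enough to sketch. The `AgentPermissions` class and the scope-string format below are illustrative, not from any particular framework — the invariant is simply that the granted set starts empty:

```python
class AgentPermissions:
    """Deny-by-default permission set: nothing is allowed unless explicitly granted."""

    def __init__(self):
        self._granted: set[str] = set()  # starts empty — there is no default access

    def grant(self, scope: str) -> None:
        # Each grant is one narrow, explicit scope, e.g. "db:read:orders"
        # or "fs:read:/data/reports" — never a wildcard.
        self._granted.add(scope)

    def is_allowed(self, scope: str) -> bool:
        return scope in self._granted


perms = AgentPermissions()
perms.grant("db:read:orders")
perms.is_allowed("db:read:orders")  # granted explicitly, so allowed
perms.is_allowed("db:read:users")   # never granted, so denied
```

The inversion matters: a forgotten entry in a denylist is an open door, while a forgotten entry in an allowlist is a support ticket.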

Scope to task, not to role. Human IAM assigns permissions by role. Agent IAM should assign permissions by task. A summarization agent doesn't need database write access. A code review agent doesn't need production credentials. A monitoring agent doesn't need the ability to modify what it monitors.
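One way to express task scoping (a minimal sketch — the task names and scope strings are hypothetical):

```python
# Permissions attach to the task, not to an "agent" role that
# accumulates access over time.
TASK_SCOPES: dict[str, set[str]] = {
    "summarize_report":    {"db:read:reports"},
    "review_pull_request": {"git:read:repo", "git:comment:repo"},
    "monitor_latency":     {"metrics:read:api"},
}

def scopes_for(task: str) -> set[str]:
    # An unknown task gets no access at all — not a fallback role.
    return TASK_SCOPES.get(task, set())

scopes_for("summarize_report")  # read access to reports, nothing else
scopes_for("unrecognized_task") # empty set: deny by default
```

Note what's absent: there is no `agent` key anywhere, so two tasks run by the same agent never share permissions by accident.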

Rotate aggressively. Agent credentials should be short-lived and task-scoped. If an agent needs database access for a 30-second query, it gets a 60-second token — not a persistent connection string sitting in an environment variable.
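The shape of a short-lived, task-scoped credential looks roughly like this (illustrative — `ScopedToken` and `mint_token` are names invented for the sketch, not a real library's API):

```python
import time
import secrets
from dataclasses import dataclass

@dataclass
class ScopedToken:
    value: str        # the bearer secret itself
    scope: str        # the one scope this token is good for
    expires_at: float # absolute expiry as a Unix timestamp

def mint_token(scope: str, ttl_seconds: int = 60) -> ScopedToken:
    """Issue a token bound to a single scope, valid only for a short window."""
    return ScopedToken(
        value=secrets.token_urlsafe(32),
        scope=scope,
        expires_at=time.time() + ttl_seconds,
    )

def is_valid(token: ScopedToken, required_scope: str) -> bool:
    # Both conditions must hold: right scope AND still inside the window.
    return token.scope == required_scope and time.time() < token.expires_at

token = mint_token("db:read:orders", ttl_seconds=60)
is_valid(token, "db:read:orders")   # valid while the 60-second window is open
is_valid(token, "db:write:orders")  # wrong scope: invalid regardless of time
```

A stolen token of this shape is worth at most sixty seconds of one narrow scope; a stolen connection string from an environment variable is worth everything, forever.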

Monitor the actual access pattern. Agents are predictable. A well-scoped agent accesses the same resources in the same patterns. Any deviation from that pattern — a summarization agent querying credential stores, a monitoring agent writing to production — is a signal.
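Because a well-scoped agent's access set is small and stable, the detector can be nearly trivial. A minimal sketch (the baseline contents and agent names are illustrative):

```python
# Expected resources per agent — built from declared scopes or observed
# steady-state behavior. Anything outside this set is a deviation.
BASELINE: dict[str, set[str]] = {
    "summarizer-agent": {"db:read:reports", "llm:invoke"},
}

def check_access(agent: str, resource: str, alerts: list[str]) -> None:
    expected = BASELINE.get(agent, set())
    if resource not in expected:
        # An unknown agent, or a known agent off its pattern, both alert.
        alerts.append(f"DEVIATION: {agent} accessed {resource}")

alerts: list[str] = []
check_access("summarizer-agent", "db:read:reports", alerts)        # normal
check_access("summarizer-agent", "vault:read:credentials", alerts) # flagged
```

The same check against human activity would drown in false positives; it works for agents precisely because agents are literal and repetitive.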

Assume compromise. Design the permission model so that when an agent is compromised — not if — the blast radius is contained to that agent's specific, minimal scope. If losing one agent means losing one scoped task, you have a security incident. If losing one agent means losing your cloud credentials, database access, and conversation history, you have a catastrophe.


The 88% number isn't going down. More agents are being deployed every week, most with the same inherited, overly broad permissions that created the problem in the first place.

The question isn't whether your AI agents will be targeted. It's how much damage the attacker can do when they get in — and that's determined entirely by what your agents can reach.

Scope the permissions. Before an attacker decides what they're for.


Evaluate your agent permission model against production-ready criteria. The Safe Autonomy Readiness Checklist covers 43 items across 8 sections — including least-privilege controls and credential management.


If you're deploying AI agents and aren't sure what they can actually access, we should talk. We audit agent architectures and build permission models that assume compromise from day one.

Contact Atypical Tech
