Your Compliance Assessment Does Not Cover AI Agents

It is Q1 audit season. NIST RA-5 vulnerability assessments are cycling. ISO 27001 Clause 9.2 internal audits are closing out. DORA's Register of Information submissions are due to European regulators by the end of March. FedRAMP 20x is reshaping how continuous monitoring works.
If your organization runs AI agents in production — coding assistants, workflow automation, autonomous tooling — you have a gap that none of these frameworks were designed to catch.
The gap nobody is assessing
Every one of these frameworks requires you to inventory your systems, assess their risks, and document your controls. Those requirements are well understood for traditional software: web applications, databases, APIs, cloud infrastructure.
AI agents do not fit the model.
An AI coding assistant does not behave like a traditional application. It reads and writes files, executes commands, makes network requests, and accesses credentials — all based on natural language input that traditional WAFs cannot inspect. A workflow automation agent calls external APIs, processes documents, and makes decisions with latent access to every integration in its environment.
When your auditor asks "what systems have access to production credentials?" and your answer does not include the AI agent that can read your .env file, your assessment has a gap.
When your vulnerability scan covers every endpoint in your network but does not account for the agent that accepts unvalidated input from a chat interface, your scan is incomplete.
The question is not whether your compliance framework applies to AI agents. It is whether your assessment of that framework accounts for what those agents can actually do.
What the frameworks expect vs. what agents do
NIST RA-5 requires organizations to scan for vulnerabilities at a defined frequency and remediate based on risk. But AI agents introduce vulnerability classes — prompt injection, tool misuse, credential exposure through context windows — that traditional scanners do not detect. Your Nessus scan will not flag that your AI coding assistant's subprocess inherits 12 environment variables, including your database connection string.
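To make the environment-inheritance risk concrete, here is a minimal Python sketch. By default, subprocess.run passes the parent's entire environment to the child unless you supply an explicit env. The SENSITIVE_MARKERS patterns and the demo credential below are illustrative assumptions, not a standard scrub list; tune them to your environment.

```python
import os
import subprocess
import sys

# Illustrative patterns only; adjust to your environment's naming conventions.
SENSITIVE_MARKERS = ("KEY", "SECRET", "TOKEN", "PASSWORD", "DATABASE", "CONN")

def scrub_env(env):
    """Return a copy of env with credential-like variables removed."""
    return {
        name: value
        for name, value in env.items()
        if not any(marker in name.upper() for marker in SENSITIVE_MARKERS)
    }

def spawn_agent_tool(cmd):
    """Run an agent tool subprocess with a scrubbed environment.

    Without the env argument, the child inherits everything the parent
    holds -- including any database connection string or API key.
    """
    return subprocess.run(cmd, env=scrub_env(dict(os.environ)),
                          capture_output=True, text=True)

if __name__ == "__main__":
    os.environ["DATABASE_URL"] = "postgres://user:pass@db:5432/prod"  # demo only
    child = spawn_agent_tool(
        [sys.executable, "-c", "import os; print('DATABASE_URL' in os.environ)"]
    )
    print(child.stdout.strip())  # prints: False -- the child no longer sees it
```

The same check works in reverse as an audit: diff the environment your agent's subprocesses actually receive against what they need, and treat everything in the difference as blast radius.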
ISO 27001 Clause 9.2 requires internal audits to verify that your ISMS is effectively implemented. If your ISMS does not include AI agent behavior in its scope — what they access, what they can modify, how their inputs are validated — then the audit is verifying an incomplete picture.
DORA mandates that financial entities identify and classify all ICT assets and their risks. An AI agent running in your development pipeline is an ICT asset. If it is not in your asset inventory, it is not in your risk assessment. If it is not in your risk assessment, your DORA submission has a gap.
FedRAMP 20x moves toward continuous monitoring with automated evidence collection. Agents that operate autonomously between monitoring intervals can take actions that are not captured by your existing telemetry — unless you have deliberately instrumented them.
This is not a checklist problem
The temptation is to treat this as a documentation exercise. Add "AI agents" to your asset inventory, write a control statement, check the box.
That misses the point.
The reason AI agents create compliance gaps is not that they are missing from your spreadsheet. It is that they operate with a fundamentally different risk profile than the systems your controls were designed for. They accept natural language input that bypasses traditional input validation. They make autonomous decisions about tool use. They inherit access permissions from their runtime environment rather than being explicitly granted them.
Documenting their existence without understanding their behavior is the compliance equivalent of listing a server in your inventory without ever scanning it.
What to do before your next audit
If you have AI agents in production — and if your engineering team uses AI coding assistants, you do — here is what closes the gap:
1. Inventory agent access, not just agent existence. For every AI agent in your environment, document what it can reach: file systems, network endpoints, credentials, APIs, databases. This is your agent's latent context, and it defines your blast radius.
2. Assess agent-specific vulnerabilities. Prompt injection, tool misuse, credential exposure through context, and supply chain risks from plugins or extensions. These are the agent-specific vulnerability classes that traditional scanners miss.
3. Instrument agent behavior. Log what your agents do, not just what they are asked to do. Input prompts, tool calls, file operations, network requests. If your agent takes an action and your SIEM does not know about it, your continuous monitoring has a gap.
4. Scope your controls to match agent capabilities. If an agent can execute arbitrary commands, your controls need to account for that. Network policies, egress restrictions, credential scoping, and output filtering are not optional — they are the minimum for an agent that operates at the trust level most organizations grant their AI tools.
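The instrumentation step (item 3) can be sketched as a decorator that emits a structured audit record for every tool invocation. The logger name and record fields here are illustrative assumptions; the point is that the log captures what the agent did — tool, arguments, outcome — not just the prompt that triggered it.

```python
import json
import logging
import time
from functools import wraps

# Route this logger's output to your SIEM in production.
audit_log = logging.getLogger("agent.audit")

def audited_tool(tool_name):
    """Decorator: emit a structured audit record for every tool invocation."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "ts": time.time(),
                "tool": tool_name,
                "args": repr(args),
                "kwargs": repr(kwargs),
            }
            try:
                result = fn(*args, **kwargs)
                record["status"] = "ok"
                return result
            except Exception as exc:
                record["status"] = f"error: {exc}"
                raise
            finally:
                # Emit the record whether the call succeeded or failed.
                audit_log.info(json.dumps(record))
        return wrapper
    return decorator

@audited_tool("read_file")
def read_file(path):
    """Example agent tool: every call is logged before the result returns."""
    with open(path) as f:
        return f.read()
```

Wrapping every tool the agent can invoke gives you the per-action telemetry that continuous monitoring regimes like FedRAMP 20x assume you already have.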
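For egress restriction (item 4), real enforcement belongs in network policy, but an application-level allowlist check adds defense in depth for agents that make their own network requests. A minimal sketch, assuming hypothetical placeholder hostnames:

```python
from urllib.parse import urlparse

# Hypothetical allowlist -- replace with the endpoints your agent actually needs.
ALLOWED_HOSTS = {"api.github.com", "internal-docs.example.com"}

class EgressDenied(Exception):
    """Raised when an agent attempts to reach a host outside the allowlist."""

def check_egress(url):
    """Validate an outbound URL against the allowlist before the agent fetches it."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise EgressDenied(f"agent egress to {host!r} is not allowlisted")
    return url
```

Call check_egress before every outbound request the agent issues; pair it with network-level egress rules so a bypassed check is still contained.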
Closing the assessment gap
The gap between what your compliance frameworks require and what your AI agents actually do is something you can close. The frameworks themselves are not the problem — their scope definitions are. Extending that scope to include agent behavior, agent access, and agent-specific risk is the work that most organizations have not done yet.
This is the kind of assessment we run in our Security Architecture Reviews. If your next audit does not account for AI agents, it is incomplete — and if your auditor knows what to look for, they will find it.
Talk to us about closing the gap →