88% Already Hit. Permissions Are the Root Cause.
Nearly 9 in 10 organizations report AI agent security incidents. The root cause isn't prompt injection or model flaws — it's overly broad permissions.
23 posts tagged with “agents”
The execution case and the accountability case are both right. The interesting question is what happens when you put them together.
Static AI guardrails are failing in production. Langflow was exploited within 20 hours. Cline was compromised through a GitHub issue title. Here's what actually works instead.
NIST RA-5, ISO 27001 9.2, DORA, FedRAMP 20x — four major compliance frameworks share the same blind spot: none of them account for AI agents in your environment. Here is what that means and what to do about it.
820 malicious packages. 30,000 exposed instances. Fortune 500 breaches. The AI agent ecosystem has a supply chain problem that traditional AppSec isn't built to catch.
Everyone optimizes the token window. Almost nobody manages the environment. Active context is what your agent thinks about. Latent context is what your agent can reach. The blast radius of a compromised agent is determined by the latter.
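To make the distinction concrete, here is a minimal Python sketch (all names hypothetical) that models latent context as the set of tools and credentials an agent can reach, separate from whatever happens to be in its prompt:

```python
# Illustrative sketch only: "latent context" modeled as the reachable surface
# of an agent, independent of its active token window. Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AgentEnvironment:
    active_context: list[str]                                  # what the agent is thinking about
    reachable_tools: set[str] = field(default_factory=set)     # latent: callable integrations
    reachable_secrets: set[str] = field(default_factory=set)   # latent: credentials in scope

    def blast_radius(self) -> set[str]:
        # A compromised agent's blast radius is its latent context,
        # no matter how carefully the active context is curated.
        return self.reachable_tools | self.reachable_secrets

env = AgentEnvironment(
    active_context=["triage ticket #4812"],
    reachable_tools={"jira.read", "jira.write", "prod_db.query"},
    reachable_secrets={"AWS_PROD_KEY"},
)
print(env.blast_radius())  # the real risk surface, invisible in the prompt
```

The point of the sketch: optimizing `active_context` does nothing to shrink `blast_radius()`; only trimming the reachable sets does.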
Indirect prompt injection has moved from theory to active exploitation. Unit 42 confirms in-the-wild attacks, PleaseFix hijacks AI agents through calendar invites, and a Claude Code CVE exposed 150,000 developers. Here is what security teams need to know.
Nineteen malicious npm packages. Four AI coding tools. Rogue MCP servers injected silently into agent configurations. SANDWORM_MODE is the first documented autonomous supply chain attack targeting AI developer toolchains — and it exposes a structural vulnerability that identity alone cannot fix.
Your AI agents inherit your permissions, your credentials, and your blast radius. NIST just proposed a fix — and the public comment window closes April 2. Here's why identity governance is the layer most agent architectures are missing.
Most teams treat token spend limits as cost management. They are blast radius containment. An autonomous agent with no spending ceiling is not a productivity tool — it is an uncontrolled liability.
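As a concrete illustration, here is a hedged sketch of a spend ceiling enforced as a fail-closed runtime control rather than a billing report; the class and exception names are assumptions, not any particular vendor's API:

```python
# Hypothetical sketch: a hard token-spend ceiling as blast radius containment.

class SpendCeilingExceeded(RuntimeError):
    """Raised when an agent run would exceed its token budget."""

class TokenBudget:
    def __init__(self, ceiling_tokens: int):
        self.ceiling = ceiling_tokens
        self.spent = 0

    def charge(self, tokens: int) -> None:
        # Fail closed: refuse the call *before* spending, so a runaway
        # loop is halted rather than billed after the fact.
        if self.spent + tokens > self.ceiling:
            raise SpendCeilingExceeded(
                f"ceiling {self.ceiling} would be exceeded ({self.spent} + {tokens})"
            )
        self.spent += tokens

budget = TokenBudget(ceiling_tokens=200_000)
budget.charge(1_500)  # call once per model invocation; raises at the ceiling
```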
Traditional SAST, DAST, and SCA tools were built for request-response architectures. Agent-first systems have vulnerability classes these tools were never designed to detect — and independent research just confirmed it.
Ambiguous specifications aren't just a project management problem anymore. In agent-first architectures, every gap in a spec is a potential security boundary violation — and the agent won't tell you it's guessing.
OWASP released its first Top 10 for Agentic Applications. Here's what each risk means, why traditional AppSec frameworks fall short, and how to start securing your AI agents today.
Every tool an agent can call is an attack surface. In agent-first architectures, the integration layer is the primary security boundary — and most teams aren't treating it that way.
The trap a16z identified for FDE-model (forward-deployed engineering) startups is identical to the trap facing AI agent deployments.
How the Safe Autonomy framework applies to vulnerability triage, alert correlation, compliance evidence, and security testing. AI agents can multiply your security team—if you build the right guardrails.
Autonomy concentrates responsibility. Here’s why every agent needs a named human owner and runtime governance controls.
The integration surface isn’t an implementation detail — it’s the boundary that determines what autonomy can safely do.
Same model, different outcomes: why supervision, specialization, triage, and long-horizon context management matter more than prompt cleverness.
Why “show your work” is a control surface, not a UX detail — and how evidence packets, confidence bands, and verification gates prevent the false confidence tax.
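One possible shape for such an evidence packet and verification gate, sketched in Python; the schema and the 0.9 threshold are illustrative assumptions, not the article's definitions:

```python
# Hedged sketch: an "evidence packet" with a confidence band routed
# through a verification gate. Field names and threshold are assumptions.

from dataclasses import dataclass

@dataclass
class EvidencePacket:
    claim: str          # what the agent asserts it did or found
    sources: list[str]  # artifacts backing the claim (logs, diffs, queries)
    confidence: float   # self-reported, 0.0 to 1.0

def verification_gate(packet: EvidencePacket, auto_threshold: float = 0.9) -> str:
    # Low confidence or missing evidence routes to a human instead of
    # proceeding silently: this is the control surface, not a UX nicety.
    if not packet.sources or packet.confidence < auto_threshold:
        return "route_to_human_review"
    return "auto_approve"

packet = EvidencePacket(
    claim="CVE-2025-1234 is not exploitable in our deployment",
    sources=["scan-report-0412.json"],
    confidence=0.72,
)
print(verification_gate(packet))  # route_to_human_review
```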
A case-study-driven series on what changes when agents operate in messy reality: reveal, structure, interfaces, and human accountability.
Why enterprises don't fear autonomous AI — they fear unowned action. A look at why human accountability becomes more essential, not less, as agents grow more capable.
A lightweight, enterprise-grade framework for designing safe, predictable, auditable agentic systems. Learn how Role, Objectives, Boundaries, Observability, and Taskflow turn ad-hoc automation into reliable operational workflows.
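By way of illustration, a minimal sketch of how the five pillars could be captured as a typed spec; the concrete field layout is an assumption for this example, not the framework's published schema:

```python
# Minimal sketch of a Role/Objectives/Boundaries/Observability/Taskflow spec.
# The dataclass layout is illustrative, not a prescribed API.

from dataclasses import dataclass

@dataclass
class AgentSpec:
    role: str                 # Role: what the agent is, and is not
    objectives: list[str]     # Objectives: measurable outcomes
    boundaries: list[str]     # Boundaries: hard limits on tools, data, spend
    observability: list[str]  # Observability: what gets logged and surfaced
    taskflow: list[str]       # Taskflow: ordered steps, including review gates

triage_agent = AgentSpec(
    role="vulnerability triage assistant (read-only)",
    objectives=["rank open CVEs by exploitability within 15 minutes"],
    boundaries=["no write access", "no external network", "10k-token budget"],
    observability=["log every tool call with inputs and outputs"],
    taskflow=["fetch scan results", "correlate", "rank", "human review gate"],
)
```

Writing the spec down in a structure like this is what turns ad-hoc automation into something that can be reviewed, diffed, and audited.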