Why We Failed Our Agent-Readiness Audit on Purpose

We ran atypicaltech.dev through an automated agent-readiness audit last week. It returned nine recommendations. We shipped three of them.
The other six, we refused. On purpose. If you run the same audit today, our "score" is lower than a site that stubbed every endpoint the checker looks for. That's the point.
Agent-readiness isn't a checkbox count. It's a trust claim.
What an "agent-ready" audit actually checks
The tool walks through a growing list of agent-discovery standards — some RFCs, some IETF drafts, some emerging proposals — and flags the ones your site doesn't publish. Among them:
- Link response headers pointing agents at catalogs and docs (RFC 8288)
- Content Signals in `robots.txt` declaring AI usage preferences
- Markdown content negotiation so `Accept: text/markdown` returns a markdown body instead of HTML
- `/.well-known/api-catalog` listing your public APIs (RFC 9727)
- `/.well-known/openid-configuration` for OAuth/OIDC discovery (RFC 8414)
- `/.well-known/oauth-protected-resource` so agents can find the right authorization server (RFC 9728)
- `/.well-known/mcp/server-card.json` advertising an MCP server
- `/.well-known/agent-skills/index.json` listing skills agents can invoke
- WebMCP exposing browser-side tools via `navigator.modelContext`
At first glance this looks like a to-do list. It isn't. It's a menu of trust claims.
Every `/.well-known/` URI you publish is a promise about the service behind it.
The three we shipped
Three of the recommendations are genuinely useful and backed by real resources we already had. These went live today.
Content Signals in robots.txt. We already block AI-training crawlers (GPTBot, CCBot, anthropic-ai, and the rest). The new Content-Signal: directive makes our preferences explicit in machine-readable form — ai-train=no, search=yes, ai-input=no. It's a complement to the existing Disallow rules, not a replacement. One line of declarative config; no runtime risk.
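A minimal sketch of what that looks like, following the Content Signals convention (the grouping here is illustrative; where the directive sits depends on how your existing robots.txt is structured):

```txt
# Existing crawler blocks stay exactly as they were.
User-Agent: GPTBot
Disallow: /

# Content Signals: machine-readable usage preferences, alongside
# (not replacing) the Disallow rules above.
User-Agent: *
Content-Signal: search=yes, ai-train=no, ai-input=no
Allow: /
```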
Link headers. We added a Link: response header on every page pointing to two resources that already exist: /llms.txt (our llmstxt.org-format description for language models) and /sitemap.xml. No headers pointing to things we don't have. No rel="service-doc" → /docs/api when there's no /docs/api.
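Concretely, the response header looks roughly like this. The `rel` and `type` values are illustrative choices, not mandates — `llms.txt` has no IANA-registered relation type yet, so pick one and document it:

```txt
Link: </llms.txt>; rel="alternate"; type="text/plain"; title="llms.txt",
      </sitemap.xml>; rel="sitemap"; type="application/xml"
```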
Markdown content negotiation. When an agent sends Accept: text/markdown to any of our blog posts, it now gets the raw markdown body — frontmatter metadata up top, content underneath — instead of a rendered HTML page it has to parse. Browsers and humans still get HTML by default. It's the most genuinely useful of the three, because LLMs researching our writing get clean context instead of scraped DOM.
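The negotiation itself reduces to comparing q-values from the `Accept` header. A hedged sketch of the decision, not our production code — `prefersMarkdown` is a hypothetical helper name:

```typescript
// Decide whether a request's Accept header prefers markdown over HTML.
// Defaults to HTML on ties or missing headers, so browsers are unaffected.
function prefersMarkdown(accept: string | null): boolean {
  if (!accept) return false;
  // Parse "type;q=..." entries; q defaults to 1.0 per RFC 9110.
  const entries = accept.split(",").map((part) => {
    const [type, ...params] = part.trim().split(";");
    const qParam = params.map((p) => p.trim()).find((p) => p.startsWith("q="));
    const q = qParam ? parseFloat(qParam.slice(2)) : 1.0;
    return { type: type.trim().toLowerCase(), q };
  });
  // Highest q-value the client assigned to a given media type.
  const score = (t: string) =>
    entries.filter((e) => e.type === t).reduce((m, e) => Math.max(m, e.q), 0);
  // Serve markdown only when the client scores it strictly above HTML.
  return score("text/markdown") > score("text/html");
}
```

Note that a bare `*/*` never triggers markdown here: agents have to ask for `text/markdown` explicitly, which is the behavior you want.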
The good agent-readiness signals make life easier for the readers you want. The bad ones make claims to readers you'd rather not hear from.
The six we refused
Here's where signal honesty matters. The remaining recommendations all presume services this site doesn't run. Publishing stubs would be worse than publishing nothing.
/.well-known/openid-configuration — a credential-phish vector
This one is the most serious. /.well-known/openid-configuration tells OAuth relying parties: here is the identity provider, here is the JWKS endpoint, here is where to exchange authorization codes for tokens. If we publish this at atypicaltech.dev with fake values, we are advertising ourselves as an OIDC issuer we are not.
Do that as a security consulting firm and you've created a credential-phish surface with your own domain name. An attacker points a relying party at your fake issuer and you've handed them a spoofing tool with CISSP credentials stapled to the trust claim.
Publishing fake OAuth discovery metadata is not a missing feature. It's a live vulnerability you pushed to production.
We're not an identity provider. We don't issue tokens. We don't have a JWKS. That endpoint stays 404 on purpose.
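For context, here is roughly the shape of an RFC 8414-style discovery document (all values are placeholders). Every field is an instruction a relying party or agent will follow, which is why a stubbed copy is an attack surface rather than a checkbox:

```json
{
  "issuer": "https://idp.example.com",
  "authorization_endpoint": "https://idp.example.com/oauth2/authorize",
  "token_endpoint": "https://idp.example.com/oauth2/token",
  "jwks_uri": "https://idp.example.com/.well-known/jwks.json",
  "response_types_supported": ["code"]
}
```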
/.well-known/api-catalog — advertising inventory we don't have
The audit flagged us for not publishing an RFC 9727 linkset of our public APIs. We have no public APIs. The routes under /app/api/* are form handlers (contact, newsletter, demo request) and three n8n webhook proxies. None of these are catalog-worthy resources, and none should be advertised as if they were.
Publishing an empty or near-empty linkset would invite enumeration, suggest service inventory we don't maintain, and plant the seed for future drift between the catalog and reality. None of that is worth the audit-score point.
/.well-known/oauth-protected-resource — same problem, different endpoint
RFC 9728 metadata tells agents which authorization server issues tokens for a given resource. We have no protected resource to advertise. We have no corresponding authorization server. Same class of lie as the OIDC discovery above.
/.well-known/mcp/server-card.json — no MCP server exists on this domain
The MCP Server Card proposal is genuinely interesting. When we have an MCP offering worth advertising — a ROBOT Framework query server, a safe-autonomy-checklist MCP, something real — we'll ship the card alongside the server. Not before.
Discovery documents are for things that exist. They are not aspirations.
Why this matters more than the three we shipped
If you've read Specification as Attack Surface, you already know the core idea: ambiguous or incorrect specifications don't just fail — they fail silently, and in production. The /.well-known/ URI tree is a specification. Publishing stub endpoints for services that don't exist is not a speedbump for agents; it's a specification bug with a security impact.
It also violates the first principle of the ROBOT Framework: advertised capability must equal actual capability. An agent can't reason correctly about a system whose self-description is a lie. Humans can't either — that's why Interfaces Define Capability and Risk and why Identity Is the Missing Layer matter.
You can't bolt "agent-ready" onto a system whose published interface disagrees with its implementation.
The automated audit isn't wrong about what the standards are. It's wrong about what it means to comply with them. A site that publishes all nine with half of them fake scores higher than one that publishes three real ones. The scoring is optimizing the wrong thing.
What "LATER" looks like
Two of the six aren't a hard "no" — they're a "not yet."
- MCP Server Card ships when we publish an actual MCP server.
- Agent Skills discovery index ships when we open-source the skills that live inside our private atypicaltech-toolkit marketplace.
Both would be on-brand when the backing service is real. Both would be theater shipped today. That's the distinction.
How to audit yours the honest way
If you're looking at the same audit tool and wondering which of its recommendations to ship, a short filter:
- Does the resource it would point to already exist? If no, don't add the pointer.
- Is the backing service running on this domain, right now, today? If no, don't publish the discovery document.
- Is the standard ratified, or a draft you're comfortable tracking? If draft, explicitly document the version you're targeting in the file or in a comment.
- Would a security-conscious reader interpret this endpoint as a trust claim? If yes, treat it as one — and only ship it if the claim is true.
Apply those and the decision tree collapses fast. You'll ship less than the audit asks for. That's usually the right answer.
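The filter above is mechanical enough to write down. A sketch, with field names of our own invention rather than anything from a standard:

```typescript
// Hypothetical publish gate mirroring the four checklist questions.
interface DiscoveryCandidate {
  name: string;            // e.g. "/.well-known/api-catalog"
  resourceExists: boolean; // the pointed-to resource already exists
  serviceLive: boolean;    // backing service runs on this domain today
  specTracked: boolean;    // ratified, or a draft version you track
  claimTrue: boolean;      // a security-conscious reading would be accurate
}

// Publish only when every trust claim behind the endpoint is true.
function shouldPublish(c: DiscoveryCandidate): boolean {
  return c.resourceExists && c.serviceLive && c.specTracked && c.claimTrue;
}
```

One false answer anywhere means the endpoint stays 404 — which is exactly how the six refusals above fall out.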
"We refused to publish this" is a more trustworthy agent-readiness signal than "we published this."
If your team is staring at an agent-readiness checklist and feeling the pressure to ship all of it — the right reaction is to read each item as a trust claim and ask whether the claim is true today. We help engineering teams draw that line. Start a conversation and we'll walk through which of the emerging agent-discovery standards actually apply to your stack — and which would just be theater.
Related Posts
The AppSec Acceleration: Why Your Security Tools Can't See Agent Vulnerabilities
Traditional SAST, DAST, and SCA tools were built for request-response architectures. Agent-first systems have vulnerability classes these tools were never designed to detect — and independent research just confirmed it.
Specification as Attack Surface: Why Ambiguity Is a Vulnerability in Agent-First Architectures
Ambiguous specifications aren't just a project management problem anymore. In agent-first architectures, every gap in a spec is a potential security boundary violation — and the agent won't tell you it's guessing.
The OWASP Top 10 for Agentic AI Is Here — What It Means for Your Deployment
OWASP released its first Top 10 for Agentic Applications. Here's what each risk means, why traditional AppSec frameworks fall short, and how to start securing your AI agents today.