How We Work: Security Engagements

Most security consulting is a black box. You sign an SOW, wait a few weeks, and get a PDF with 200 findings sorted by CVSS score. Half are irrelevant to your architecture. The other half you already knew about but hadn't prioritized.
Then nothing changes. According to Cobalt's 2025 State of Pentesting Report, organizations remediate only 48% of pentest findings — the other 52% are never fixed. Even the findings that do get addressed take a median of 67 days to resolve, despite 75% of organizations claiming 14-day SLAs.
The problem isn't the findings. It's the process that produces them.
We work differently. This post describes what a security engagement with Atypical Tech actually looks like — the structure, the deliverables, the decisions you'll make, and what you'll have when it's done.
No mystery process. No overhead theater. No findings that sit in a spreadsheet.
Who this is for
Security engagements make sense when you have a specific need and a finite timeline:
- Pre-fundraise teams who know investors will ask about security posture and want real answers
- Startups approaching enterprise sales where security questionnaires and SOC 2 readiness become deal blockers — 74% of enterprise buyers require SOC 2 before engaging with a vendor
- Engineering teams without dedicated security staff who need expert review without a full-time hire — the ISC2 2024 Workforce Study puts the global cybersecurity talent gap at 4.8 million, with 90% of teams reporting skills gaps
- Companies deploying AI agents or automation who need to understand the new attack surface before it's in production
If you need ongoing monitoring rather than a focused engagement, the Security Health Monitor is built for that.
The structure
Every engagement follows the same four phases. The scope varies — some engagements are two weeks, some are six — but the structure doesn't change.
Phase 1: Scope and threat model (Week 1)
This is where we figure out what actually matters for your specific situation. Not a generic checklist — a threat model built from your architecture, your business context, and your actual risk profile.
What happens:
- Discovery call: what are you building, who uses it, what data flows through it, what keeps you up at night
- Architecture walkthrough: we read your code, your infrastructure config, your deployment pipeline
- Trust boundary mapping: where does trusted meet untrusted? Where do privilege levels change?
- Threat model: what are the realistic attack scenarios given your architecture and threat landscape?
What you get:
- A trust boundary diagram you can put on a whiteboard
- A prioritized threat model — not exhaustive, but focused on what could actually hurt you
- Agreed scope for the rest of the engagement
We don't start reviewing code until we understand what we're protecting and from whom. A SQL injection in a public-facing payment endpoint is a different severity than the same flaw in an internal admin tool behind a VPN. Context determines priority.
There's a reason OWASP added Insecure Design as #4 in the Top Ten — and why NIST's Secure Software Development Framework makes threat modeling an explicit practice. IBM and Ponemon data consistently show that fixing a defect in production costs up to 100x more than catching it during design. The threat model isn't a warm-up exercise. It's where the highest-leverage decisions happen.
The threat model is the engagement. Everything else follows from it.
Phase 2: Review (Weeks 2–3)
This is the hands-on work. What we review depends on what the threat model surfaced, but typically covers some combination of:
Application security:
- Authentication and session management
- Authorization logic and access control
- Input handling and injection vectors
- API design and data exposure
- Dependency analysis (SCA with exploitability context, not just CVE counts — only 1–2% of published CVEs are exploited in any given year, so raw counts are noise)
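The exploitability-first triage in that last bullet reduces to a simple filter. Everything below is illustrative: the CVE identifiers and EPSS scores are made-up samples (real scores would come from FIRST's EPSS feed), and the 0.1 threshold is a placeholder, not a recommendation:

```python
# Hedged sketch: triaging SCA output by exploitability, not CVE count.
# Sample data only; "reachable" would come from reachability analysis
# of whether the vulnerable code path is actually called.
findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "epss": 0.00042, "reachable": False},
    {"cve": "CVE-2024-0002", "cvss": 6.5, "epss": 0.91,    "reachable": True},
    {"cve": "CVE-2024-0003", "cvss": 7.2, "epss": 0.0007,  "reachable": True},
]

def worth_triaging(finding, epss_threshold=0.1):
    # A finding matters if it is plausibly exploited in the wild AND the
    # vulnerable code path is reachable from your application.
    return finding["reachable"] and finding["epss"] >= epss_threshold

actionable = [f["cve"] for f in findings if worth_triaging(f)]
# Only CVE-2024-0002 survives, despite its mid-range CVSS score.
```

This is the "400 findings, 6 that mattered" pattern in miniature: CVSS alone would have ranked the unreachable 9.8 first.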
Infrastructure and deployment:
- Cloud configuration (IAM, network, storage, logging)
- CI/CD pipeline security (secrets handling, artifact integrity, deployment credentials)
- Container and runtime security
- Secrets management and rotation
For teams deploying agents or automation:
- Tool and API access inventory — every integration is an attack surface
- Credential scope vs. role scope — are agents over-privileged?
- Output controls — what can the agent expose, and to whom?
- Boundary enforcement — what happens when the agent hits an edge case?
This maps directly to the ROBOT framework: Role-scoped access, defined Objectives, enforced Boundaries, comprehensive Observability, controlled Taskflow. If you've read The Interface Security Imperative, you know why the integration layer matters.
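The credential-scope-vs-role-scope check from the list above is, at its core, a set comparison: does the agent's credential grant anything its role never needs? A hedged sketch with invented scope strings (this is not the actual ROBOT tooling, just the shape of the check):

```python
# Hedged sketch: flag agent credentials broader than the agent's role.
# Scope names are illustrative placeholders.
def excess_scopes(credential_scopes: set, role_scopes: set) -> set:
    """Return scopes the credential grants that the role never requires."""
    return credential_scopes - role_scopes

role_needs = {"tickets:read", "tickets:comment"}
credential = {"tickets:read", "tickets:comment", "tickets:delete", "users:read"}

over_privileged = excess_scopes(credential, role_needs)
# Non-empty result means the agent is over-privileged:
# {"tickets:delete", "users:read"}
```

In practice the left-hand set comes from the credential inventory and the right-hand set from the role definition; any non-empty difference is a finding.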
How we work during this phase:
- We review in your environment — your repo, your cloud, your tools. Not a sanitized export.
- Findings are documented as we go, not saved for a big reveal.
- If we find something critical, you hear about it that day. Not in the final report.
Critical findings don't wait for the PDF.
Phase 3: Deliverables (Week 4)
The output is not a 200-page PDF sorted by CVSS score. CVSS alone is a poor prioritization tool — FIRST's EPSS research shows that organizations patching everything scored CVSS 7+ face a 97.7% false positive rate relative to actual exploitation. Meanwhile, 14% of exploited CVEs had low or medium CVSS scores that would have deprioritized them entirely.
Our deliverables are designed to drive action, not collect dust.
What you get:
- Findings report — each finding includes:
  - What we found and where
  - Why it matters in your specific context (not generic CVSS hand-waving)
  - Concrete remediation steps your team can execute
  - Priority based on your threat model, not abstract severity
- Architecture recommendations — structural improvements that address root causes, not just symptoms. If the same class of vulnerability appears in five places, the fix isn't five patches — it's a design change.
- Remediation roadmap — sequenced by impact and effort. What to fix this sprint, what to schedule for next quarter, what to accept with documentation.
- Compliance mapping (when relevant) — how findings map to SOC 2, HIPAA, or whatever framework your customers or investors care about. We don't do compliance theater, but if a finding has compliance implications, we'll tell you.
What you won't get:
- Findings you can't act on
- Generic severity ratings disconnected from your context
- A report that requires a consultant to interpret
Phase 4: Walkthrough and handoff
We walk through every finding with your team. Not a presentation — a working session where engineers can ask questions, challenge priorities, and plan remediation.
If something in the report doesn't make sense, we fix the report. If your team disagrees with a priority, we discuss the tradeoff. The goal is that your team owns the outcome, not that they depend on us to interpret it.
After the walkthrough:
- You have everything you need to remediate without further consulting hours
- We're available for questions during remediation (included, not billed separately)
- If you want us to verify fixes, we can do a targeted re-test
What this looks like: a real engagement shape
A Series A SaaS company. Django backend, React frontend, AWS infrastructure. Twelve engineers, no security hire. Enterprise prospects are asking for SOC 2 compliance evidence.
Week 1: Discovery and threat model. Mapped trust boundaries across the application, identified that the API layer had no rate limiting and session tokens never expired. Threat model focused on the three paths an attacker would most likely take given the architecture.
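For a Django stack, fixes for the session-expiry and rate-limiting findings can be a few lines of configuration. The values below are illustrative, not a recommendation, and the `REST_FRAMEWORK` block assumes Django REST Framework is in use:

```python
# settings.py — hedged sketch of the kind of fix these findings drive.
# Expire sessions after 8 hours instead of never (value in seconds):
SESSION_COOKIE_AGE = 8 * 60 * 60
# Refresh the expiry on activity, giving a sliding window:
SESSION_SAVE_EVERY_REQUEST = True

# Rate limiting at the API layer (assumes Django REST Framework;
# rates are placeholders to be tuned against real traffic):
REST_FRAMEWORK = {
    "DEFAULT_THROTTLE_CLASSES": [
        "rest_framework.throttling.AnonRateThrottle",
        "rest_framework.throttling.UserRateThrottle",
    ],
    "DEFAULT_THROTTLE_RATES": {
        "anon": "100/hour",
        "user": "1000/hour",
    },
}
```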
Week 2: Application review. Found auth bypass in the admin invitation flow, IDOR on the reporting API, and overly permissive S3 bucket policies. Also found that their dependency scanner was running but nobody was triaging the output — 400 findings, 6 that mattered.
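An IDOR like the one on the reporting API comes down to a missing object-level authorization check: the handler fetched by ID alone, so any authenticated user could read any report. A minimal framework-free sketch (the `Report` shape and organization fields are hypothetical, not the client's code):

```python
# Hedged sketch: the fix for an IDOR is to scope every object fetch
# to the requester's tenant, not just to "any logged-in user".
from dataclasses import dataclass

class PermissionDenied(Exception):
    pass

@dataclass
class Report:
    id: int
    organization_id: int

# Stand-in for a database; keyed by report id.
REPORTS = {1: Report(1, organization_id=10), 2: Report(2, organization_id=20)}

def get_report(requesting_org: int, report_id: int) -> Report:
    report = REPORTS[report_id]
    # The vulnerable version returned here unconditionally.
    # The object-level check is the fix:
    if report.organization_id != requesting_org:
        raise PermissionDenied(
            f"org {requesting_org} may not read report {report_id}"
        )
    return report
```

If the same check is missing on five endpoints, the roadmap recommendation is usually a shared query scope or middleware, not five patches.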
Week 3: Infrastructure and pipeline review. CI/CD secrets stored in environment variables accessible to all team members. No audit logging on the cloud provider. Deployment credentials had full admin access rather than scoped permissions.
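Scoping deployment credentials means replacing the full-admin policy with one that allows only the actions the pipeline actually performs. A hedged sketch of what that might look like as an AWS IAM policy document (actions and ARNs are placeholders for illustration; the real set depends on the deploy pipeline):

```python
# Hedged sketch: a least-privilege deploy policy instead of admin access.
# The pipeline here is assumed to update one ECS service and push one
# artifact bucket; account ID, region, and names are invented.
DEPLOY_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ecs:UpdateService", "ecs:DescribeServices"],
            "Resource": "arn:aws:ecs:us-east-1:123456789012:service/app/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::app-deploy-artifacts/*",
        },
    ],
}
```

The test of a deploy credential is simple: if it can do anything the deploy itself doesn't do, it's over-scoped.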
Week 4: Deliverables and walkthrough. 14 findings total. 3 critical (auth bypass, IDOR, S3), 5 high, 6 medium. Remediation roadmap: critical fixes this sprint, high fixes this quarter, medium fixes as part of normal development. Compliance mapping showed 8 of 14 findings directly relevant to SOC 2 controls they'd need.
Result: Critical fixes shipped in 10 days. The team used the roadmap to plan their next two quarters of security work. SOC 2 readiness audit passed four months later.
For context: Edgescan's 2025 report puts the industry mean time to remediate critical vulnerabilities at 65 days. Bitsight found that over 60% of even CISA Known Exploited Vulnerabilities miss their mandated remediation deadlines. Ten days from report to critical fixes is what happens when findings come with context, not just severity scores.
14 findings, 3 critical. Not 200 findings, 3 relevant.
What we don't do
Transparency about scope is as important as transparency about process:
- We don't do compliance paperwork. We'll identify what's relevant and map findings to controls, but we're not writing your SOC 2 policies. That's a different engagement with a different firm.
- We don't do ongoing managed security. Engagements are finite. If you need continuous monitoring, the Security Health Monitor handles that.
- We don't do red team exercises. We review architecture and code for real vulnerabilities. If you need adversarial simulation, we can refer you.
- We don't sell hours. Engagements are scoped by outcome, not by time-and-materials. You know the cost before we start.
How to start
- Book a discovery call. 30 minutes. We'll understand what you're trying to accomplish and whether a security engagement is the right fit.
- We'll propose a scope and timeline. Fixed scope, fixed price, clear deliverables.
- We start. Phase 1 begins within a week of agreement.
No sales process. No multi-week procurement. No mystery.
Already doing the basics and want to know where you stand? The Safe Autonomy Readiness Checklist covers 43 items across 8 sections — from role definition to governance.
Want continuous visibility instead of a point-in-time review? The Security Health Monitor runs weekly scans on your external attack surface and delivers actionable reports.
Ready to talk? Contact Atypical Tech
References
- Cobalt & Cyentia Institute, State of Pentesting Report 2025 — remediation rates, time-to-fix benchmarks
- Verizon, 2025 Data Breach Investigations Report — vulnerability exploitation trends, third-party breach involvement
- VulnCheck, 2024 Exploitation Trends — CVE exploitation rates, CVSS scoring limitations
- FIRST, Exploit Prediction Scoring System (EPSS) — exploit probability vs. CVSS severity
- OWASP, Top Ten A04:2021 — Insecure Design — threat modeling as foundational practice
- NIST, SP 800-218: Secure Software Development Framework — federal threat modeling guidance
- IBM, Cost of a Data Breach Report 2024 — cost of late vs. early defect detection
- ISC2, 2024 Cybersecurity Workforce Study — global talent gap, skills shortage data
- Edgescan, 2025 Vulnerability Statistics Report — mean time to remediate by severity
- Bitsight, KEV Remediation Timelines — CISA KEV deadline compliance rates
Related Posts
The AppSec Acceleration: Why Your Security Tools Can't See Agent Vulnerabilities
Traditional SAST, DAST, and SCA tools were built for request-response architectures. Agent-first systems have vulnerability classes these tools were never designed to detect — and independent research just confirmed it.
Specification as Attack Surface: Why Ambiguity Is a Vulnerability in Agent-First Architectures
Ambiguous specifications aren't just a project management problem anymore. In agent-first architectures, every gap in a spec is a potential security boundary violation — and the agent won't tell you it's guessing.
How We Work: Automation Engagements
What an automation engagement with Atypical Tech actually looks like — from discovery through validation. No paved cow paths, no automation theater, no surprises.