How We Work: Automation Engagements

Most automation projects fail quietly. They launch with fanfare, work for a few weeks, and then drift into disuse — or worse, keep running while producing wrong outputs that nobody checks. Research from Virtuoso QA puts the failure rate for automation projects at 73%. McKinsey's data is equally grim: on average, large IT projects deliver 56% less value than predicted.
The problem isn't the tooling. It's that most teams automate their existing workflow as-is — what Jim Highsmith calls "paving the cow paths." Automating a broken process just makes it fail faster. Peer-reviewed research in Production Planning & Control found that automating a process without first streamlining it generates new problems that slow down flow and increase errors.
We work differently. This post describes what an automation engagement with Atypical Tech actually looks like — the structure, the decisions, the deliverables, and what you'll have when it's done.
No paved cow paths. No automation theater. No workflows that run but don't actually work.
Who this is for
Automation engagements make sense when you have manual workflows consuming more time than they're worth:
- Engineering teams buried in repetitive ops — deployment checklists, environment provisioning, manual triage of alerts. Asana's Anatomy of Work research found knowledge workers spend 62% of their day on coordination tasks, leaving only 13% for the strategic work they were hired to do.
- Growing teams that can't hire fast enough — headcount is expensive and slow. Automation extends the capacity of the people you already have.
- Companies deploying AI or agents who need guardrails, not just capabilities — the DORA 2024 State of DevOps Report found that elite-performing teams use automation to make releases more reliable, not just faster.
- Teams with failed automation attempts who tried "just add a tool" and ended up with more overhead, not less.
If you need security review rather than automation, the companion post How We Work: Security Engagements covers that process.
The philosophy: Remove, Reduce, Reveal
Before we touch any tooling, we apply a framework we call Remove, Reduce, Reveal — drawn from our Safe Autonomy first principles:
- Remove — Eliminate work that shouldn't exist. Manual approvals nobody reads. Reports nobody uses. Steps that persist because "we've always done it that way." The most efficient automation is the one you don't build because the work it would automate was unnecessary.
- Reduce — Simplify what remains. Standardize inputs, collapse redundant steps, eliminate context switches. Lean methodology calls this ESSA: Eliminate, Simplify, Standardize, then Automate — a Centric Consulting case study used this approach to reduce a 288-step process to 35 steps, cutting lead time by 60%.
- Reveal — Make the automated workflow observable. Every action produces evidence. Every decision has a trail. Humans can see what the automation did and why, and can intervene when it's wrong. This isn't optional — it's how you maintain trust as autonomy expands.
This sequence matters. Teams that skip straight to "automate everything" end up in the J-curve — productivity drops before it rises, because they've added tooling overhead without removing the underlying complexity. We've seen it enough times to build the prevention into our process.
The most common automation failure is automating the wrong thing perfectly.
The structure
Every engagement follows three phases. The scope varies — some engagements are two weeks, some are six — but the structure doesn't change.
Phase 1: Discovery and workflow mapping (Week 1)
This is where we figure out what to automate, what to eliminate, and what to leave alone. Not a feature requirements session — a workflow audit built from observation, not assumptions.
What happens:
- Discovery call: what are the workflows consuming the most time? Where are the bottlenecks? What manual steps create the most errors or frustration?
- Workflow observation: we watch how work actually flows, not how the wiki says it flows. The gap between the two is usually where the biggest wins hide.
- Time and friction mapping: where does time actually go? IDC research found knowledge workers spend 2.5 hours per day — 30% of their workday — just searching for information.
- Remove/Reduce pass: before we design anything, we identify what can be eliminated or simplified. This typically removes 20-40% of the work before automation even enters the picture.
What you get:
- A workflow map showing current state, pain points, and time sinks
- A Remove/Reduce/Reveal analysis — what goes away, what gets simplified, what gets automated
- A prioritized automation plan — sequenced by impact and risk
- Estimated ROI for each workflow being considered
We don't start building until we understand what we're building and why. Discovery is where the 73% failure rate is prevented — by not automating the wrong things.
Discovery isn't overhead. It's where the highest-leverage decisions happen.
Phase 2: Build (Weeks 2–3)
This is the hands-on work. What we build depends on what discovery surfaced, but typically covers some combination of:
Workflow automation:
- Data pipeline automation — ingestion, transformation, delivery
- Notification and escalation routing
- Approval workflows with appropriate human checkpoints
- Integration between systems that currently require manual bridging
- Scheduled operations replacing manual checklists
For teams deploying AI or agents:
- Guardrails and constraint enforcement — what the agent can and cannot do
- Output validation pipelines — automated checks on agent-generated content
- Human-in-the-loop escalation points — when confidence is low, the automation asks, it doesn't guess
- Audit trail infrastructure — every agent action logged, every decision traceable
This maps directly to the ROBOT framework: Role-scoped access, clear Objectives, enforced Boundaries, comprehensive Observability, controlled Taskflow. Every automation we build has these five properties by default, not as an add-on.
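The five ROBOT properties can be pictured as a thin guard wrapped around every agent action. The sketch below is illustrative only, not our implementation: the role name, the `CONFIDENCE_FLOOR` threshold, and the in-memory `audit_log` are assumptions made for the example.

```python
import time

# Hypothetical role-scoped policy: each role lists the only actions it may take.
POLICY = {
    "triage-agent": {"enrich_alert", "dedupe_alert", "route_alert"},
}

CONFIDENCE_FLOOR = 0.8  # below this, escalate to a human instead of acting
audit_log = []          # in production: append-only, durable storage

def run_action(role, action, payload, confidence):
    """Execute an agent action only if it is in-scope and high-confidence."""
    entry = {"ts": time.time(), "role": role, "action": action,
             "confidence": confidence}
    # Boundaries: refuse anything outside the role's declared scope.
    if action not in POLICY.get(role, set()):
        entry["outcome"] = "denied:out_of_scope"
        audit_log.append(entry)
        raise PermissionError(f"{role} may not perform {action}")
    # Human-in-the-loop: low confidence means ask, don't guess.
    if confidence < CONFIDENCE_FLOOR:
        entry["outcome"] = "escalated:low_confidence"
        audit_log.append(entry)
        return {"status": "escalated", "payload": payload}
    entry["outcome"] = "executed"
    audit_log.append(entry)  # Observability: every decision leaves a trail.
    return {"status": "executed", "payload": payload}
```

Note that every path through the guard, including the denial, writes to the audit log: the trail exists whether or not the action ran.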
Why security expertise matters here:
Every workflow you automate inherits credentials, accesses systems, and makes decisions. Datadog's 2024 State of DevSecOps report found that 63% of CI/CD pipelines authenticate using long-lived credentials — one of the most common causes of data breaches. Black Duck's 2024 research found that nearly half of organizations knowingly deploy vulnerable code under time pressure.
We think like security engineers because we are security engineers. That means:
- Credentials are scoped to minimum necessary access, rotated, and never shared across workflows
- Automation operates with the principle of least privilege — it can do what it needs and nothing more
- Failure modes are designed, not discovered in production
- Every integration point is a trust boundary that we explicitly define and monitor
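The credential rules above can be sketched as short-lived, single-workflow tokens. This is a minimal illustration under stated assumptions; the field names, the 900-second TTL, and the scope strings are invented for the example, and a real system would use a secrets manager rather than in-process tokens.

```python
import secrets
import time

def issue_credential(workflow, scopes, ttl_seconds=900):
    """Mint a short-lived credential bound to exactly one workflow."""
    return {
        "token": secrets.token_hex(16),
        "workflow": workflow,
        "scopes": frozenset(scopes),
        "expires_at": time.time() + ttl_seconds,  # forces routine rotation
    }

def authorize(credential, workflow, scope):
    """Least privilege: the call must match workflow and scope, and be unexpired."""
    if time.time() >= credential["expires_at"]:
        return False  # an expired token is useless if it leaks later
    if credential["workflow"] != workflow:
        return False  # no sharing across workflows, by construction
    return scope in credential["scopes"]
```

Because the credential names its workflow, sharing it across workflows fails closed instead of silently widening access.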
Automation built without security expertise is a new attack surface with privileged access.
Phase 3: Validation (Week 4)
We don't ship and walk away. The validation phase is where we prove the automation works correctly, handles edge cases, and doesn't break when conditions change.
What happens:
- Parallel running: the automation and the manual process run side by side. Outputs are compared. Discrepancies are investigated and resolved.
- Edge case testing: what happens when inputs are malformed? When an API is down? When the volume is 10x normal? When permissions change?
- Graduated handoff: the team operates the automation with us available, then without us, with a clear escalation path.
- Metrics baseline: we establish what "normal" looks like so deviations are detectable from day one.
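The parallel-running step can be sketched as a small comparison harness. This is a simplified sketch: real comparisons usually need tolerances and field-level diffs rather than strict equality, and both function arguments here are placeholders.

```python
def parallel_run(inputs, manual_fn, automated_fn):
    """Run the trusted manual path and the candidate automation on the
    same inputs, and collect every discrepancy for investigation."""
    discrepancies = []
    for item in inputs:
        expected = manual_fn(item)    # the trusted manual process
        actual = automated_fn(item)   # the candidate automation
        if expected != actual:
            discrepancies.append({"input": item,
                                  "manual": expected,
                                  "automated": actual})
    return discrepancies
```

An empty discrepancy list over a representative input set is the evidence that lets the manual process be retired.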
Research on human-in-the-loop autonomy from the International Journal of Robotics Research confirms this pattern: systems that keep humans in a monitoring loop during initial deployment achieve more reliable operation and can safely expand autonomy over time. Human interventions during this phase aren't failure — they're calibration data.
What you get:
- Validated automation running in production
- Runbooks for ongoing operation and troubleshooting
- Monitoring and alerting configuration
- A documented expansion path — what to automate next, and when it's safe to do so
The validation phase is also where we ensure the automation is maintainable without us. If it only works while we're around, we haven't delivered an asset — we've delivered a dependency.
If the automation requires the consultant to operate, it's not automation.
What this looks like: a real engagement shape
A 30-person B2B SaaS company. Node.js backend, AWS infrastructure. Two engineers spending roughly 15 hours per week on deployment checklists, alert triage, and environment provisioning. No dedicated DevOps or platform team.
Week 1: Discovery and workflow mapping. Observed the actual deployment process — 23 manual steps, including 4 that were documented wrong. Identified that alert triage consumed 8 hours per week, but 70% of alerts were noise that could be filtered automatically. Removed 6 deployment steps that existed because of a migration two years ago that nobody cleaned up.
Weeks 2–3: Built three automations: (1) deployment pipeline that reduced 23 manual steps to a single command with rollback capability, (2) alert triage pipeline that automatically enriched, deduplicated, and prioritized incoming alerts — routing true positives to Slack and suppressing noise, (3) environment provisioning from a 2-hour manual process to a 5-minute parameterized workflow. All three built with scoped credentials, audit trails, and circuit breakers.
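The shape of a triage pipeline like (2) can be sketched in a few lines: suppress known noise, deduplicate by fingerprint, then prioritize. The suppression list, alert fields, and fingerprint choice below are illustrative assumptions, not the actual pipeline.

```python
from collections import OrderedDict

NOISE_SOURCES = {"heartbeat", "autoscaler-info"}  # illustrative suppression list

def triage(alerts):
    """Suppress noise, dedupe by fingerprint, return most severe first."""
    seen = OrderedDict()
    for alert in alerts:
        if alert["source"] in NOISE_SOURCES:
            continue  # suppressed noise never reaches a human
        key = (alert["source"], alert["signature"])  # dedupe fingerprint
        if key not in seen or alert["severity"] > seen[key]["severity"]:
            seen[key] = alert  # keep the most severe instance of a duplicate
    # Prioritize: highest severity first; routing (e.g. to Slack) happens downstream.
    return sorted(seen.values(), key=lambda a: -a["severity"])
```

Enrichment (attaching runbook links, owner, recent deploys) would slot in before the dedupe step in a fuller pipeline.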
Week 4: Parallel-ran all three automations against the existing manual processes. Found two edge cases in the deployment pipeline (specific database migration sequencing) and one in alert triage (a vendor-specific alert format that wasn't being parsed correctly). Fixed both. Established baselines: deployment time dropped from 45 minutes to 8 minutes. Alert noise reduced by 85%. Environment provisioning went from 2 hours to 5 minutes.
Result: 15 hours per week of engineering time recovered. The two engineers who had been doing manual ops were reassigned to product work within the month. The deployment pipeline reduced release anxiety enough that the team moved from weekly to daily deploys — consistent with what the DORA research shows about the relationship between deployment automation and deployment frequency.
Three automations, 15 hours recovered. Not by working faster — by eliminating the work that shouldn't have been manual.
What we don't do
Transparency about scope matters as much as transparency about process:
- We don't sell tools. We're not reselling Zapier or recommending a platform. We build what solves the problem, using whatever technology fits. Sometimes that's a script. Sometimes it's a workflow engine. Sometimes it's an agent with guardrails.
- We don't do big-bang rollouts. Every automation starts small and expands based on demonstrated success. If someone promises to "automate your entire operations in 6 weeks," be skeptical.
- We don't automate for automation's sake. If discovery reveals that the best answer is "hire someone" or "change the process" rather than "build a workflow," we'll tell you. Our incentive is your outcome, not billable automation hours.
- We don't sell hours. Engagements are scoped by outcome, not by time-and-materials. You know the cost before we start.
The connection to security
This is the part most automation consultancies can't offer. We don't just build workflows — we build workflows that don't become security liabilities.
Every automated process has credentials, network access, and decision-making authority. Without security expertise informing the design, you get:
- Shared credentials that can't be rotated without breaking the workflow
- Over-privileged service accounts that can access far more than the automation needs
- No audit trail for what the automation actually did
- No circuit breakers for when the automation does something unexpected
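A circuit breaker in this context is just a failure counter that halts the automation instead of letting it retry indefinitely. The sketch below is a minimal version; the failure threshold and reset behavior are simplified assumptions, and production breakers usually add a timed half-open state.

```python
class CircuitBreaker:
    """Stop an automation after repeated failures instead of retrying forever."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False  # open circuit == automation halted for human review

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("circuit open: automation halted for review")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True  # trip: fail safe, not fail silent
            raise
        self.failures = 0  # a success resets the count
        return result
```

The point is the tripped state: once open, the automation cannot keep acting on a broken assumption until a human resets it.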
Our Security Engagements and Automation Engagements share the same foundational principle: safety before speed. The automation we build is trustworthy because we think like attackers. The security work we do is efficient because we know how to automate what should be automated.
That's not two separate value propositions. It's one.
How to start
- Book a discovery call. 30 minutes. We'll understand your workflows, where time is being lost, and whether an automation engagement is the right fit.
- We'll propose a scope and timeline. Fixed scope, fixed price, clear deliverables.
- We start. Phase 1 begins within a week of agreement.
No sales process. No multi-week procurement. No mystery.
Already doing the basics and want to know where you stand? The Safe Autonomy Readiness Checklist covers 43 items across 8 sections — from role definition to governance.
Need a security review instead? Read How We Work: Security Engagements for the security-focused engagement process.
Ready to talk? Contact Atypical Tech
References
- Virtuoso QA, 73% of Test Automation Projects Fail — automation project failure rates and root causes
- McKinsey, Delivering Large-Scale IT Projects on Time, on Budget, and on Value — IT project value delivery shortfalls
- Highsmith, Paving Cow Paths — anti-pattern of automating inefficient processes as-is
- Taylor & Francis, Lean First, Then Automate — peer-reviewed research on process streamlining before automation
- Asana, Anatomy of Work Index — knowledge worker time allocation research
- Centric Consulting, ESSA Framework Case Study — 288-to-35 step process reduction
- DORA / Google Cloud, 2024 State of DevOps Report — deployment automation and team performance
- Datadog, State of DevSecOps 2024 — CI/CD credential security, pipeline automation gaps
- Black Duck, 2024 Global State of DevSecOps — vulnerable code deployment under time pressure
- Liu et al., Robot Learning on the Job: Human-in-the-Loop Autonomy — graduated autonomy through human oversight during deployment
Related Posts
Your Token Budget Is a Security Control
Most teams treat token spend limits as cost management. They are blast radius containment. An autonomous agent with no spending ceiling is not a productivity tool — it is an uncontrolled liability.
The AppSec Acceleration: Why Your Security Tools Can't See Agent Vulnerabilities
Traditional SAST, DAST, and SCA tools were built for request-response architectures. Agent-first systems have vulnerability classes these tools were never designed to detect — and independent research just confirmed it.
How We Work: Security Engagements
What a security engagement with Atypical Tech actually looks like — from the first call to the final deliverable. No mystery, no overhead, no surprises.