← Back to Blog

The Tuesday Briefing — Mar 17, 2026

6 min readAtypical Tech
Illustration for The Tuesday Briefing — Mar 17, 2026

The Big Picture

AI tools crossed a line this week. They're no longer just helpful assistants with security quirks — they're serious attack vectors. Autonomous AI hacked a major consulting firm in under two hours. The most popular coding assistants are shipping vulnerabilities in 87% of their output. And a government banned an AI tool entirely after 10,000 installations were compromised.

If your business uses AI assistants, coding tools, or is even thinking about it — this week's news is the wake-up call.

This Week's Top 5

1. AI Hacked a Major Consulting Firm in Under Two Hours

What happened: An autonomous AI security tool broke into McKinsey's internal AI platform in just 118 minutes. It chained together multiple attack vectors without human help — finding weak spots, exploiting them, and reporting back. All on its own.

Why it matters to your business: Traditional security assumes human attackers who work slowly and make mistakes. AI attackers don't. They work at machine speed, testing thousands of possibilities in minutes. Your security team might have days to patch problems before a human hacker finds them — but only hours before an AI does.

What to do: If you're building or using AI tools, require real-time monitoring that alerts you to unusual API activity within minutes, not days. Ask your IT provider: "Can we detect automated attacks happening in real-time?"

2. AI Coding Tools Are Creating More Security Holes Than They Fix

What happened: A security study found that 87% of code changes made by AI assistants like Claude, GitHub Copilot, and Gemini contained security vulnerabilities — especially in critical areas like login systems and payment processing. Separately, fake installation pages for these tools are spreading malware that steals developer credentials.

Why it matters to your business: If your developers use AI coding assistants to work faster, they may be unknowingly adding security weaknesses to your applications. We're talking broken authentication, missing brute-force protection, and wide-open input validation.

What to do: Require manual security review for any code touching authentication, payments, or customer data — even if AI wrote it. Install security scanning tools in your development pipeline that specifically check for the vulnerabilities AI tends to create.

3. China Banned an AI Tool Over Security Flaws — You Might Be Using Something Similar

What happened: China's government banned the OpenClaw AI agent from official use after discovering it had no authentication by default, stored sensitive data in plain text, and could be exploited to steal data or take control of systems. Over 10,000 installations were compromised through malicious add-ons.

Why it matters to your business: Open-source AI tools are attractive because they're free and powerful. But they often ship without basic security controls. If employees are installing AI assistants on their own — what security teams call "shadow IT" — they may be creating backdoors into your network without realizing it.

What to do: Create an approved list of AI tools your company can use, and require IT approval before installing anything new. Three questions for every tool: Does it require authentication? Does it encrypt stored data? Where does it send your information?

4. Documents Can Now Hack AI Systems

What happened: Security researchers demonstrated that malicious instructions hidden in documents — passport scans, ID photos, invoices — can hijack AI systems that process them. In tests on financial Know Your Customer (KYC) systems, these hidden instructions caused AI to leak sensitive data 80% of the time.

Why it matters to your business: If you use AI to process customer documents, invoices, contracts, or support tickets, attackers can embed commands in those files that your AI follows instead of its original instructions. This is especially dangerous for financial services, healthcare, and HR departments processing identity documents.

What to do: If you're using AI to process documents, add a sanitization layer that strips potentially harmful content before the AI sees it. Ask your AI vendor: "How do you protect against prompt injection through uploaded documents?"

5. Exposed VPN Without MFA Is Now the #1 Insurance Claim

What happened: According to cybersecurity firm ESET, open VPN servers without multi-factor authentication have surpassed remote desktop as the leading cause of cyber insurance claims for small and mid-size businesses. Attackers are buying stolen credentials and walking right in.

Why it matters to your business: With 3.3 billion credentials stolen last year, there's a good chance your employees' passwords are already for sale. Without MFA, attackers can simply buy those credentials and log into your VPN as if they were legitimate users. Your security systems won't know the difference.

What to do: Enable multi-factor authentication on your VPN this week. If your VPN doesn't support MFA, schedule time to replace it with one that does. This is also likely required by your cyber insurance policy.

Quick Hits

  • Microsoft patched 83 vulnerabilities this month including two being actively exploited — update Windows systems immediately.

  • Google Chrome had two zero-day vulnerabilities that allowed code execution just by visiting a website — update Chrome now.

  • Cisco released patches for firewall vulnerabilities rated 10.0/10.0 severity (the maximum) — if you use Cisco firewalls, apply updates urgently.

  • The U.S. government issued a new cyber strategy requiring post-quantum cryptography readiness within 90 days for federal suppliers.

  • Gartner predicts over 1,000 lawsuits against companies for harm caused by AI agents in 2026 — with executives potentially held personally liable.

  • OpenAI acquired Promptfoo, a security testing tool used by 25% of Fortune 500 companies, signaling that AI security testing is now mainstream.

  • Ethereum Foundation quadrupled its maximum bug bounty reward to $1 million for critical vulnerabilities.

One Thing to Do This Week

Audit which AI tools your employees are actually using. Send a simple survey: "What AI assistants, chatbots, or coding tools do you use for work?" Include options like ChatGPT, Claude, Copilot, Gemini, and an "other" field. Once you have the list, you can create an approved list, set up proper access controls, and train employees on safe AI usage. This single action addresses "shadow AI" — the number one gap in most SMB security programs right now.

Worth Reading

Related Posts