
The Tuesday Briefing — Mar 31, 2026

6 min read · Atypical Tech

The Big Picture

This week, AI-powered tools moved from helpful assistants to serious security risks. Attackers are now using AI to break into systems faster than companies can defend them — sometimes in under 24 hours. Meanwhile, the very tools meant to protect your code and systems (like security scanners and AI coding assistants) were themselves compromised. If your business uses AI tools, this week's news is your wake-up call.

This Week's Top 5

1. Security Scanners Were Turned Into Weapons

What happened: Hackers compromised Trivy, a popular security scanning tool used by thousands of companies including NASA and Netflix. They modified it to steal passwords, cloud access keys, and other credentials from the very systems it was supposed to protect.

Why it matters to your business: If you use automated security tools in your software development process, they may have been stealing your passwords instead of protecting you. This affects any company using modern development practices.

What to do: If your team uses Trivy or similar security scanners, immediately rotate all passwords, API keys, and cloud credentials that those systems could access. Then verify you're using the official, uncompromised versions.

2. AI Tools Can Now Control Your Computer

What happened: Anthropic released a new feature for Claude (their AI assistant) that lets it control your mouse, keyboard, and files directly. While this can boost productivity, researchers also demonstrated how attackers can exploit AI assistants to steal data or execute malicious actions without any clicks from you.

Why it matters to your business: If your employees use AI assistants like Claude, GitHub Copilot, or Microsoft Copilot, those tools may have more access to your systems and data than you realize — and attackers are learning how to hijack them.

What to do: Create a policy for AI tool use that specifies what data and systems they can access. Don't let AI assistants touch sensitive files, customer data, or production systems without explicit approval and monitoring.

3. Hackers Are Exploiting Vulnerabilities in Under 24 Hours

What happened: Multiple security experts at the RSA Conference confirmed that AI is helping attackers find and exploit software vulnerabilities in less than a day — down from weeks or months. One ransomware group exploited a Cisco firewall flaw for 30 days before Cisco even announced it existed.

Why it matters to your business: The traditional approach of "we'll patch it next week" is dead. Attackers now move faster than your IT team's monthly update schedule, meaning unpatched systems are compromised almost immediately.

What to do: Enable automatic security updates for all critical systems — especially firewalls, VPNs, and anything that connects to the internet. If you can't auto-update, establish a 24-48 hour emergency patching process for critical vulnerabilities.
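A 24-48 hour policy only works if everyone agrees, in advance, what triggers it. One way to make the rule unambiguous is to write it down as code. This is an illustrative sketch (the thresholds and the function name `patch_deadline` are our own example, not a standard):

```python
from datetime import datetime, timedelta

def patch_deadline(disclosed: datetime, cvss: float) -> datetime:
    """Latest acceptable patch time under an example 24/48-hour policy:
    critical flaws (CVSS >= 9.0) within 24 hours, high-severity
    (CVSS >= 7.0) within 48 hours, everything else on a normal
    30-day cycle."""
    if cvss >= 9.0:
        return disclosed + timedelta(hours=24)
    if cvss >= 7.0:
        return disclosed + timedelta(hours=48)
    return disclosed + timedelta(days=30)
```

Whether you encode it or just write it in a runbook, the point is the same: the severity-to-deadline mapping is decided before the next emergency, not during it.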

4. AI-Generated Code Contains Security Flaws Half the Time

What happened: Research found that code written by AI tools like GitHub Copilot and Claude Code passes security checks only 55% of the time — meaning nearly half contains vulnerabilities. Even worse, 35 new security flaws were discovered in March alone that were directly caused by AI-generated code, up from just 6 in January.

Why it matters to your business: If your developers are using AI to write code faster (and many are, whether you know it or not), they're likely introducing security holes at an unprecedented rate. This creates hidden vulnerabilities that attackers will find before you do.

What to do: Require human security review for any code written with AI assistance. Treat AI as a junior developer — helpful but requiring supervision — not as a replacement for experienced developers who understand security.

5. Most Companies Can't Tell AI Actions from Human Actions

What happened: A survey found that 68% of organizations cannot distinguish between actions taken by AI systems and those taken by humans. Additionally, 74% of AI systems have been given excessive permissions, accessing far more data and systems than they need.

Why it matters to your business: When an AI assistant has the same access as your IT administrator — and you can't tell what it's doing — a single malicious command or hijacked AI tool could access your entire business. This is like giving a temp worker the master keys and no supervision.

What to do: Audit what AI tools have access to in your business right now. Create separate, limited accounts for AI assistants that only access what they absolutely need, and implement logging so you can see what they're doing.
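The simplest fix for the "can't tell AI from human" problem is to tag every logged action with who (or what) performed it. As a rough sketch of what such a structured audit-log entry could look like (the field names and the `audit_entry` helper are our own illustration, not a standard format):

```python
import json
from datetime import datetime, timezone

def audit_entry(actor: str, actor_type: str, action: str, resource: str) -> str:
    """Build one structured audit-log line. actor_type is 'human' or
    'ai', so AI-initiated actions can be filtered and reviewed
    separately from human ones."""
    if actor_type not in ("human", "ai"):
        raise ValueError("actor_type must be 'human' or 'ai'")
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "actor_type": actor_type,
        "action": action,
        "resource": resource,
    })

# Example: an AI assistant reading a customer record gets its own
# clearly labeled log line, distinct from a human doing the same thing.
line = audit_entry("claude-assistant", "ai", "read", "crm/customers")
```

Once every action carries an `actor_type`, answering "what did the AI touch last week?" becomes a log filter instead of a forensic investigation.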

Quick Hits

  • Citrix NetScaler, a common business VPN, has critical flaws being actively exploited — update immediately if you use it.

  • A zero-click attack was demonstrated against Microsoft Copilot, Google Gemini, and Salesforce's AI tools, proving these aren't theoretical risks.

  • Government contractors must now comply with updated NIST security standards (SP 800-171 Rev 3) with only one hour to report incidents.

  • Healthcare organizations face new mandatory security requirements with no exceptions for older systems — HIPAA rules are tightening significantly.

  • AI coding assistants like Cursor were found to have no file protection, potentially letting attackers read any file on your developers' computers.

  • OpenAI launched a bug bounty program paying up to $100,000 for finding security flaws in their AI systems — a sign of how serious these risks are.

  • Small businesses are spending 11% more on cybersecurity in 2026, but 60% still have poor security posture despite this investment.

One Thing to Do This Week

Make a list of every AI tool your company uses — ChatGPT, Copilot, Claude, or any other assistant. For each one, write down what data it can access and who's using it. Many businesses discover they have "shadow AI" — employees using AI tools on sensitive data without IT knowing. This 30-minute inventory is the foundation for every other security decision you'll make about AI. If you find tools being used that you didn't know about, that's your biggest risk and should be addressed first.
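The inventory above is just a spreadsheet, but if you want a starting template, here is one possible shape for it in Python (the column names and example rows are hypothetical placeholders for your own findings):

```python
import csv
import io

# Hypothetical starting inventory; replace with what you actually find.
inventory = [
    {"tool": "ChatGPT", "used_by": "marketing",
     "data_access": "public copy only", "it_approved": "yes"},
    {"tool": "GitHub Copilot", "used_by": "engineering",
     "data_access": "source code", "it_approved": "no"},
]

def to_csv(rows):
    """Render the inventory as CSV, ready to paste into a spreadsheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["tool", "used_by", "data_access", "it_approved"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Anything IT hasn't approved is your "shadow AI" and your top priority:
shadow = [r["tool"] for r in inventory if r["it_approved"] == "no"]
```

Four columns is enough: the tool, who uses it, what data it can see, and whether IT knows. Everything in the last column marked "no" goes to the top of your to-do list.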
