The Tuesday Briefing — Apr 7, 2026

The Big Picture
This week marked a turning point in AI security: major leaks exposed the inner workings of leading AI coding tools, attackers weaponized those leaks within hours, and critical vulnerabilities in widely used software were exploited before most organizations could patch. If your business uses AI coding assistants, JavaScript libraries, or Citrix/Fortinet systems, you need to act this week.
This Week's Top 5
1. Major AI Coding Tool Leak Turned Into Attacks Within 24 Hours
What happened: Anthropic accidentally published over 512,000 lines of source code for its Claude Code AI assistant on March 31. Within one day, attackers had created fake versions packed with malware that steals passwords, crypto wallets, and business credentials.
Why it matters to your business: If your developers use Claude Code, Cursor, GitHub Copilot, or similar AI coding tools, they may have been exposed to these malicious versions. These tools often have access to your company's source code, cloud credentials, and internal systems.
What to do: Ask your IT team to audit which AI coding tools are installed across your organization and verify they're using legitimate, up-to-date versions from official sources only.
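If your IT team wants a concrete starting point, here is a minimal sketch (Python) that checks whether common AI coding command-line tools are present on a machine. The tool names in the list are examples chosen for illustration, not an official inventory; adjust them to whatever your organization actually allows.

```python
# Quick inventory sketch: checks whether some common AI coding command-line
# tools are on this machine's PATH. The names below are examples only;
# adjust the list to the tools your organization actually permits.
import shutil

candidate_clis = ["claude", "cursor", "aider", "codeium", "gh"]

for name in candidate_clis:
    path = shutil.which(name)
    print(f"{name:10s} -> {path if path else 'not found'}")
```

Run it on a sample of developer machines and compare the results against your approved-software list.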
2. Attackers Compromised a JavaScript Library Used by Millions
What happened: On March 31, attackers hijacked the "axios" package — a JavaScript library used in millions of websites and applications — and published malicious versions that install remote access tools on developers' computers. The malicious versions were downloaded more than 600,000 times in just three hours before the compromise was caught.
Why it matters to your business: If your website or applications use JavaScript (which most do), there's a chance your development team downloaded the compromised version. This gives attackers a backdoor into your systems that can steal data, credentials, and customer information.
What to do: Have your development team check if they use axios and verify they're on clean versions (avoid 1.7.8, 1.6.6, 1.14.1, and 0.30.4). Run a security scan on any systems where developers installed packages between March 31 and April 2.
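For development teams that want to automate that check, here is a minimal sketch (Python) that walks a project folder and flags any package-lock.json pinned to one of the versions listed above. It assumes a standard npm v2/v3 lockfile layout, so treat it as a starting point rather than a complete audit.

```python
# Minimal sketch: walks the current directory, finds package-lock.json files,
# and flags any axios entry pinned to one of the versions named above.
# Assumes an npm v2/v3 lockfile with a "packages" map; adjust as needed.
import json
import pathlib

BAD_AXIOS_VERSIONS = {"1.7.8", "1.6.6", "1.14.1", "0.30.4"}

for lockfile in pathlib.Path(".").rglob("package-lock.json"):
    data = json.loads(lockfile.read_text())
    for pkg_path, info in data.get("packages", {}).items():
        if pkg_path.endswith("node_modules/axios"):
            version = info.get("version", "")
            if version in BAD_AXIOS_VERSIONS:
                print(f"WARNING: {lockfile} pins axios {version}")
            else:
                print(f"OK: {lockfile} pins axios {version}")
```

For a single repository, running `npm ls axios` gives the same answer without any scripting.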
3. AI Systems Are Making Security Worse, Not Better
What happened: New research from Atlassian showed that AI code review tools miss about 6% more security issues than human reviewers. Meanwhile, a separate study found that 35 new security vulnerabilities in March alone came directly from AI-generated code, up from just 6 in January.
Why it matters to your business: If your team is using AI tools to write code faster, you may be trading speed for security. These tools don't understand the business context or spot unusual attack patterns that human experts catch. The result: more vulnerabilities making it into your production systems.
What to do: Don't eliminate human security review just because you've added AI tools. Instead, use AI to speed up routine tasks, but keep experienced developers reviewing all code before it goes live — especially anything that handles customer data or payments.
4. Critical Flaws in Fortinet and Citrix Are Being Exploited Right Now
What happened: Two widely used business systems — Fortinet's FortiClient EMS (for managing endpoint security) and Citrix NetScaler (for remote access) — have critical vulnerabilities that attackers are actively exploiting. The U.S. government's cybersecurity agency (CISA) added both to its "must patch immediately" list this week.
Why it matters to your business: These systems often protect your remote workers and company networks. If you use them, attackers can potentially bypass all your security, steal data, and move freely through your network. Over 17,000 vulnerable systems are currently exposed to the internet.
What to do: If you use Fortinet FortiClient EMS or Citrix NetScaler, contact your IT provider today to confirm these systems are patched. If they can't patch immediately, disconnect these systems from the internet until they can.
5. AI Agents Can Now Control Your Computer — And That's a Security Problem
What happened: Anthropic released a new feature that lets their Claude AI system directly control computers — clicking, typing, opening apps, and building software without human intervention. Meanwhile, researchers demonstrated how AI "agents" can be tricked into deleting files, leaking passwords, and performing unauthorized actions through carefully crafted web pages.
Why it matters to your business: As these AI systems gain more autonomy, they create new ways for attackers to cause damage. A malicious website or email could trick an AI agent into exposing sensitive company data or making unauthorized changes to your systems — all while looking like legitimate activity.
What to do: If your team is experimenting with AI agents that can take actions on their own (like automated customer service bots or AI assistants with system access), implement "kill switches" and require human approval for any action involving sensitive data, financial transactions, or system changes.
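For teams building their own agent integrations, the sketch below (Python) shows the shape of such a gate: a global kill switch plus an approval prompt in front of a short list of sensitive actions. The action names and the console prompt are illustrative assumptions, not part of any vendor's product.

```python
# Illustrative sketch of a human-approval gate for AI agent actions.
# SENSITIVE_ACTIONS and the approve() prompt are assumptions; a real
# deployment would log to an audit trail and route approvals to a named person.
KILL_SWITCH_ENGAGED = False
SENSITIVE_ACTIONS = {"send_payment", "delete_file", "export_customer_data"}

def approve(action: str, details: str) -> bool:
    """Require an explicit yes from a human before a sensitive action runs."""
    answer = input(f"Agent wants to run '{action}' ({details}). Allow? [y/N] ")
    return answer.strip().lower() == "y"

def run_agent_action(action: str, details: str) -> str:
    if KILL_SWITCH_ENGAGED:
        return "blocked: kill switch engaged"
    if action in SENSITIVE_ACTIONS and not approve(action, details):
        return "blocked: human approval denied"
    # ... hand off to the agent / tool implementation here ...
    return f"executed: {action}"

print(run_agent_action("summarize_report", "Q1 sales summary"))
print(run_agent_action("export_customer_data", "full CRM export"))
```

The design point is simple: the agent can propose anything, but only a human can authorize the actions that touch money, data, or systems.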
Quick Hits
- Google fixed a critical vulnerability in Chrome this week — make sure your team's browsers are set to auto-update.
- A new vulnerability in popular AI assistant "OpenClaw" exposed 135,000 systems, with 63% having no authentication at all.
- Microsoft published official security guidance for the "OWASP Top 10" risks specific to AI agents — a framework your security team should know.
- Security tools themselves are being targeted: the Trivy vulnerability scanner was compromised, affecting the European Commission and 23,000 code repositories.
- Ransomware group Storm-1175 can now deploy ransomware within 24 hours of initial breach — down from the traditional 7-10 days.
- SonicWall reports small business cyberattacks increased 20.8% this year, with 88% involving ransomware.
- Node.js suspended its bug bounty program because AI tools are flooding it with low-quality, automated vulnerability reports.
- A $45 million cryptocurrency loss was traced to attackers "poisoning" an AI trading system's memory with malicious instructions.
One Thing to Do This Week
Audit what AI tools your team is using — especially for coding, content creation, or customer service. Create a simple spreadsheet listing: what AI tool, who uses it, what company data it can access, and whether it can take actions automatically. This 30-minute exercise will help you spot high-risk AI deployments before they become security incidents. If you discover any AI tool with access to customer data or financial systems that can act without human approval, that's your top priority to lock down.
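If a spreadsheet template helps, here is a minimal sketch (Python) that creates the inventory file with those four columns. The example rows are placeholders to show the format, not real findings.

```python
# Starter for the 30-minute audit described above: writes a CSV with the
# four columns from the checklist. The example rows are placeholders only.
import csv

COLUMNS = ["AI tool", "Who uses it", "Company data it can access",
           "Can it act automatically?"]
EXAMPLE_ROWS = [
    ["Coding assistant (example)", "Engineering", "Source code, cloud credentials", "No"],
    ["Support chatbot (example)", "Customer service", "Customer records", "Yes"],
]

with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(EXAMPLE_ROWS)

print("Wrote ai_tool_inventory.csv; review any row where the last column is 'Yes'.")
```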
Worth Reading
- Microsoft's Agent Governance Toolkit — Free, open-source security controls for AI agents, including kill switches and trust scoring.
- CISA's Known Exploited Vulnerabilities Catalog — The government's list of vulnerabilities being actively attacked; check it weekly and prioritize patches.
- How the Axios Attack Was Detected — Technical breakdown of this week's supply chain attack and how behavior-based security caught it.
- AI Privacy Claims Are Not Controls — What your compliance auditors actually require for AI systems (not just marketing promises).