
The Tuesday Briefing — Feb 24, 2026

6 min read · Atypical Tech

The Big Picture

This week marked a turning point: AI-powered tools aren't just changing how we work; they're also changing how attackers break in. Security researchers found serious security holes in AI coding assistants and automated agents, while attackers are using AI to break into systems four times faster than before. If your business uses any AI tools (and most do, even without realizing it), this week's news matters to you.

This Week's Top 5

1. AI Coding Assistants Can Leak Your Entire Codebase Without Anyone Noticing

What happened: Security researchers discovered that popular AI coding tools like Cursor can be tricked into silently copying your entire company's code to an attacker's account, with no warning signs or security alerts to tip you off.

Why it matters to your business: If your developers use AI coding assistants (tools that help write code faster), attackers can steal your proprietary software, customer data, or trade secrets without leaving a trace. Infected add-ons for these tools have already been downloaded more than 200,000 times.

What to do: Talk to your development team about which AI coding tools they're using. Ask them to only install extensions or "skills" from verified, official sources—not random developers on the internet.

2. Attackers Now Break In and Steal Data in Just 72 Minutes

What happened: A major security firm analyzed 750+ cyberattacks and found that AI-powered attackers now complete the entire attack—from first breaking in to stealing your data—in as little as 72 minutes. That's four times faster than last year.

Why it matters to your business: Traditional security approaches assume you have days to detect and respond to an attack. You now have hours at most. If you're relying on weekly security reviews or manual monitoring, you're already too late.

What to do: Set up automated alerts for suspicious activity on your most critical systems (email, file servers, financial software). If you don't have 24/7 monitoring, consider a managed security service—the cost is far less than recovering from a breach.
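
If you have in-house IT (or a technically inclined employee), even a small script can be a stopgap while you evaluate managed options. Below is a minimal sketch in Python that flags bursts of failed SSH logins; it assumes a Linux server that logs to /var/log/auth.log, and the log path, threshold, and alert destination are placeholders to adapt to your environment.

    # failed_login_alert.py - minimal sketch: flag bursts of failed SSH logins.
    # Assumes a Linux host logging to /var/log/auth.log; path, threshold, and
    # the alert channel (here, just stdout) are placeholders to adapt.
    import re
    from collections import Counter

    LOG_PATH = "/var/log/auth.log"   # assumption: standard syslog auth log
    THRESHOLD = 10                   # failures from one IP before we alert

    pattern = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")
    failures = Counter()

    with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = pattern.search(line)
            if match:
                failures[match.group(1)] += 1

    for ip, count in failures.most_common():
        if count >= THRESHOLD:
            # In production, send this to email, Slack, or your monitoring tool.
            print(f"ALERT: {count} failed logins from {ip}")

A script like this is no substitute for 24/7 monitoring, but it shows the shape of the automation: watch a critical signal, set a threshold, and make the alert impossible to miss.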

3. AI Agents Have Massive Security Holes—And Companies Are Using Them Anyway

What happened: Over 80% of Fortune 500 companies are now using AI "agents"—automated tools that can read emails, access files, and take actions on their own. Security researchers found over 21,000 of these agents exposed online with serious vulnerabilities, and discovered that malware is now specifically designed to steal AI agent credentials.

Why it matters to your business: AI agents often get access to everything—your email, documents, calendar, and company systems—without the security controls you'd use for human employees. When compromised, they can delete databases or leak sensitive information through what looks like normal activity.

What to do: Make a list of every AI tool your team uses that can access company data or take actions automatically. Treat each one like a new employee: What data can it access? Who approved it? Can you revoke its access if something goes wrong?
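
If your team wants to go a step further, that "new employee" checklist can live as a structured record that is easy to review and re-run. Here is a minimal sketch in Python; the tool names, fields, and entries are illustrative, not a real inventory:

    # ai_tool_inventory.py - minimal sketch: the "treat it like an employee"
    # checklist as structured records. All entries below are made-up examples.
    ai_tools = [
        {"name": "coding assistant", "data_access": ["source code"],
         "acts_autonomously": False, "approved_by": "CTO"},
        {"name": "email summarizer", "data_access": ["email", "calendar"],
         "acts_autonomously": True, "approved_by": None},
    ]

    for tool in ai_tools:
        # The riskiest combination: acts on its own, and nobody signed off.
        if tool["acts_autonomously"] and not tool["approved_by"]:
            print(f"REVIEW: {tool['name']} acts autonomously with no approver")

Running it surfaces the same red flag the checklist would: the example email summarizer can act on its own, and nobody approved it.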

4. A Simple Trick Can Make AI Do Anything—Including Attack Your Business

What happened: Multiple security teams discovered that AI assistants from major companies (including Anthropic's Claude) can be manipulated through "prompt injection"—hidden instructions in documents, emails, or calendar invites that make the AI ignore its safety rules and follow attacker commands instead.

Why it matters to your business: If you use AI tools to summarize emails, analyze documents, or schedule meetings, attackers can hide malicious instructions in content that looks completely normal. The AI might leak confidential information, approve fraudulent requests, or give attackers access to your systems—all while appearing to work normally.

What to do: Never give AI assistants permission to take actions without human approval—especially financial transactions, system changes, or external communications. Configure your AI tools to only suggest actions, not execute them automatically.
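
The pattern to ask your vendors or IT team about is usually called "human in the loop": the AI proposes, a person approves, and only then does anything execute. Here is a minimal sketch in Python, with stand-in functions for whatever your assistant actually does:

    # approval_gate.py - minimal sketch of "suggest, don't execute": every
    # AI-proposed action passes a human yes/no before it runs. The functions
    # are stand-ins for a real assistant integration.
    def execute_action(action: str) -> None:
        print(f"Executing: {action}")  # stand-in for the real side effect

    def human_approves(action: str) -> bool:
        answer = input(f"AI proposes: {action!r}. Approve? [y/N] ")
        return answer.strip().lower() == "y"

    def run_with_approval(proposed_actions: list[str]) -> None:
        for action in proposed_actions:
            if human_approves(action):
                execute_action(action)
            else:
                print(f"Blocked: {action}")

    if __name__ == "__main__":
        # Example: actions an assistant might propose after reading an email.
        run_with_approval(["reply to client", "wire $50,000 to new account"])

Crucially, the approval step sits outside the AI: even if a hidden instruction hijacks the model, the wire transfer still lands in front of a human first.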

5. Chinese Labs Stole Advanced AI Capabilities Using Fake Accounts

What happened: Anthropic (maker of Claude AI) revealed that three Chinese AI labs used 24,000 fake accounts and 16 million conversations to systematically extract Claude's capabilities, including how to bypass safety restrictions. This is a new type of attack on the AI supply chain: extracting a model's capabilities at scale through ordinary user accounts.

Why it matters to your business: If you're building business processes around specific AI capabilities, competitors or attackers can steal those same capabilities—including weaknesses and workarounds. What seems like a secure, proprietary AI tool today might have its tricks copied tomorrow.

What to do: Don't put all your competitive advantage into AI features alone. Combine AI tools with human expertise, proprietary data, and business relationships that can't be easily copied through technical means.

Quick Hits

  • Over 600 FortiGate firewalls were compromised in five weeks by a "low-to-medium skill" attacker who used AI to automate the attacks, showing that AI makes even unsophisticated attackers far more dangerous.

  • Anthropic released a new AI security tool that found 500+ serious vulnerabilities in widely used code. The release also raised concerns, though, because the model that powers the tool writes more vulnerable code than its predecessors did.

  • Ransomware groups are shifting away from well-defended large companies toward small and mid-size businesses, which now show up in 88% of ransomware attacks (versus 39% for large enterprises).

  • Security researchers warn that AI tools like Microsoft Copilot can leak data even after Microsoft patches the vulnerability—because the real problem is how companies misconfigure the tools, not the software itself.

  • A critical vulnerability in widely-used remote support software (BeyondTrust) is being actively exploited, with over 16,000 systems still exposed to attack.

  • The first official security framework for AI agents was just released by OWASP (a major security standards organization), identifying ten critical risks that most companies aren't addressing.

  • Patching speed matters more than ever: 32% of vulnerabilities are now exploited before a patch is even available (so-called "zero-days").

One Thing to Do This Week

Audit your AI tools. Spend 30 minutes listing every AI tool your company uses, including browser extensions, coding assistants, chatbots, and automated systems. For each one, write down: (1) what company data it can access, (2) whether it can take actions automatically, and (3) who approved it. You'll probably discover AI tools you didn't know existed, often installed by individual employees trying to work faster. This simple inventory is the first step to managing AI security risk, and it costs nothing but time. If you find tools with broad access that nobody approved, start there: review whether they're necessary and what security settings they have.
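
If a spreadsheet is your natural home for this, the three questions map directly to columns. Here is a minimal sketch in Python that writes a starter file you can open in Excel or Google Sheets; the rows are made-up examples, not recommendations:

    # start_inventory.py - minimal sketch: create a starter AI-tool inventory
    # as a CSV. The example rows are placeholders; replace with what you find.
    import csv

    COLUMNS = ["tool", "data_access", "acts_automatically", "approved_by"]
    EXAMPLE_ROWS = [
        ("browser AI extension", "web pages viewed", "no", "unknown"),
        ("AI coding assistant", "source code", "yes (edits files)", "eng lead"),
    ]

    with open("ai_tool_inventory.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        writer.writerows(EXAMPLE_ROWS)

    print("Wrote ai_tool_inventory.csv - add one row per tool you find.")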
