The Tuesday Briefing — Mar 10, 2026

The Big Picture
AI tools are creating two major problems at once: attackers are using AI to break into systems faster than ever, while the AI tools businesses rely on for productivity have serious security holes. This week saw the first confirmed attacks exploiting AI assistants in real-world situations, plus a wave of critical vulnerabilities in popular AI coding tools. If your team uses AI assistants like ChatGPT, Claude, or GitHub Copilot, you need to take action now.
This Week's Top 5
1. AI Assistants Under Attack: First Real-World Exploits Confirmed
What happened: Security researchers confirmed the first real-world attacks in which hackers manipulated AI assistants by hiding malicious instructions inside websites and emails. When employees used AI tools to summarize that content, the hidden instructions took over, stealing credentials and accessing sensitive files.
Why it matters to your business: If your team uses AI chatbots or browser assistants to summarize emails, research topics, or draft content, attackers can now weaponize ordinary web pages and messages to hijack those tools. The AI follows the hidden instructions without your employee ever knowing.
What to do: Create a policy today: employees should not paste customer data, financial information, passwords, or proprietary business details into any AI chatbot (ChatGPT, Claude, Copilot, etc.) unless your IT team has specifically approved and secured that tool for company use.
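For teams that do approve a chatbot, one cheap technical guardrail is to screen text for obvious secrets before it ever reaches the AI service. Below is a minimal, hypothetical sketch in Python; the patterns and the screening function are illustrative placeholders, not a complete data-loss-prevention solution.

```python
import re

# Illustrative patterns only; real DLP tools use far more thorough rules.
SECRET_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),       # AWS access key ID format
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # rough credit-card-like number
    re.compile(r"(?i)password\s*[:=]\s*\S+"),  # inline password assignments
]

def screen_prompt(text: str) -> str:
    """Block prompts that appear to contain sensitive data."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(text):
            raise ValueError("Prompt appears to contain sensitive data; blocked.")
    return text

# Usage: screen everything before it reaches any AI tool.
safe_text = screen_prompt("Summarize this meeting agenda for me.")
```

A screen like this catches only the most obvious leaks; the policy above is still the primary control.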
2. Critical Security Holes in AI Coding Tools
What happened: Major vulnerabilities were discovered in Claude Code and other AI coding assistants used by developers. The most dangerous flaw: simply opening a malicious code project can automatically steal API keys, passwords, and access tokens in milliseconds, before any security warning appears.
Why it matters to your business: If your development team uses AI coding assistants (80% of Fortune 500 companies do), those tools can become a backdoor into your systems. Stolen API keys give attackers access to your cloud services, customer databases, and code repositories.
What to do: If you have developers, ask your IT lead this week: "Do we use AI coding tools like GitHub Copilot or Claude Code? If yes, have we reviewed their security settings and do we rotate our API keys monthly?" Start rotating API keys immediately if you haven't already.
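Rotation is easiest when it's scripted. As a minimal sketch, assuming your keys are AWS IAM access keys managed with boto3 (the user name and old key ID below are hypothetical placeholders), the rotation cycle looks like this; adapt the same create/verify/deactivate/delete pattern to wherever your keys actually live.

```python
import boto3

iam = boto3.client("iam")
USER = "ci-deploy-bot"            # hypothetical IAM user name
OLD_KEY_ID = "AKIAOLDKEYEXAMPLE"  # placeholder for the key being retired

# 1. Create a replacement key (IAM allows two active keys per user).
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("New key created:", new_key["AccessKeyId"])

# 2. Update your services and CI secrets to use the new key, then verify they work.

# 3. Deactivate (don't delete) the old key first, so you can roll back if needed.
iam.update_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID, Status="Inactive")

# 4. Only once everything still works, delete the old key for good:
# iam.delete_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID)
```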
3. AI Vulnerabilities Surge 33-109% in One Year
What happened: A major security report found that vulnerabilities in AI agent systems exploded this year: 363 documented security flaws, up as much as 109% from last year. Over 83% of the AI systems the researchers examined were using outdated, insecure communication protocols.
Why it matters to your business: Businesses are rushing to adopt AI tools faster than security teams can protect them. The result: most AI systems in use today have known security holes that attackers can exploit to access your data, inject malicious commands, or take over automated processes.
What to do: Create an inventory this week: list every AI tool your business uses (chatbots, automated customer service, AI writing assistants, scheduling tools, etc.). For each one, document who has access and what data it can see. This is your starting point for managing AI security risk.
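The inventory doesn't need special software; a spreadsheet works, or a short script like the minimal Python sketch below. The column names and example rows are placeholders to adapt to your own tools.

```python
import csv

COLUMNS = ["tool", "owner", "who_has_access", "data_it_can_see", "it_approved"]

# Hypothetical example rows; replace with your own tools.
rows = [
    ["ChatGPT (personal accounts)", "Marketing", "whole team", "draft copy, web research", "no"],
    ["GitHub Copilot", "Engineering", "12 developers", "source code", "yes"],
]

with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(rows)
```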
4. Hackers Used AI to Break Into 600+ Fortinet Devices in 8 Minutes
What happened: Attackers used an AI-powered hacking tool to compromise over 600 corporate firewalls across 55 countries in just 8 minutes. The tool combined artificial intelligence with traditional hacking techniques to scan for weaknesses and automatically exploit them at machine speed.
Why it matters to your business: AI is making attacks faster, cheaper, and more effective. What used to take human hackers days or weeks now happens in minutes. Small businesses are especially vulnerable because attackers can now hit hundreds of targets simultaneously with AI-automated attacks.
What to do: If you use Fortinet FortiGate firewalls, contact your IT provider or Fortinet directly to verify you have the latest security patches installed. More broadly: ensure your firewalls and network devices aren't using default passwords and have strong, unique credentials.
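If you already track network devices in a simple inventory, even a small script can flag boxes that haven't been patched. A minimal sketch, assuming a CSV of devices with a firmware_version column; the minimum version below is a placeholder, not Fortinet's actual advisory number, so check the vendor advisory for the real one.

```python
import csv

MIN_PATCHED = (7, 4, 5)  # placeholder: use the version from the vendor advisory

def parse_version(v: str) -> tuple:
    """Turn a version string like 'v7.2.1' into a comparable tuple (7, 2, 1)."""
    return tuple(int(part) for part in v.strip("v").split("."))

# Assumed columns: hostname, model, firmware_version
with open("network_devices.csv") as f:
    for device in csv.DictReader(f):
        if parse_version(device["firmware_version"]) < MIN_PATCHED:
            print(f"PATCH NEEDED: {device['hostname']} ({device['model']}) "
                  f"is on {device['firmware_version']}")
```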
5. Major AI Service Outage Shows Dependency Risk
What happened: Anthropic's Claude AI service experienced a major global outage lasting several hours, affecting businesses worldwide that rely on it for customer service, content creation, and software development. The outage cascaded through multiple services when demand surged after Claude topped the App Store charts.
Why it matters to your business: If your business operations depend entirely on one AI service, an outage can shut down critical workflows. Many businesses discovered they had no backup plan when Claude went down, leaving customer support, content teams, and developers unable to work.
What to do: Identify which AI tools your business depends on for daily operations. For any critical workflows, create a backup plan: either have an alternative AI service ready, or document how to perform the task manually if the AI service goes down. Don't put all your eggs in one AI basket.
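For development teams, "have a backup plan" can be as concrete as a fallback in the code that calls the AI service. A minimal sketch using the official anthropic and openai Python SDKs; the model names are placeholders, so check each provider's current model list before relying on this.

```python
import anthropic
import openai

def ask_ai(prompt: str) -> str:
    """Try the primary AI provider; fall back to a second one on any failure."""
    try:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        resp = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model name
            max_tokens=500,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text
    except Exception:
        client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
```

Even a fallback this simple turns a multi-hour outage into a brief blip, as long as someone has tested the second provider before the day it's needed.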
Quick Hits
- Google is offering up to $10 million in bug bounties for vulnerabilities in its AI models, the largest AI security reward program ever, signaling how seriously Big Tech is taking AI security threats.
- Hackers stole 150GB of data from Mexican government agencies by "jailbreaking" Claude AI with over 1,000 carefully crafted prompts to bypass its safety controls.
- A Chrome browser vulnerability allowed malicious extensions to hijack Google's Gemini AI assistant, accessing cameras, microphones, and local files without user consent (now patched).
- The cURL project permanently shut down its bug bounty program after AI-generated spam reports reached 20% of submissions, with only 5% being valid security issues.
- New research shows 63% of employees pasted sensitive company data into personal AI chatbots in 2025, with each resulting data breach costing an average of $670,000 more than traditional breaches.
- Survey data reveals 80% of organizations observe risky AI behaviors, including unauthorized data access, but only 21% of executives have full visibility into what their AI tools can actually access.
- A federal executive order now requires government contractors to implement vulnerability disclosure programs, creating new compliance requirements that may extend to subcontractors and suppliers.
One Thing to Do This Week
Audit which AI tools your team is actually using—not just what IT has approved. Ask each department manager to list any AI assistants, chatbots, or automation tools their teams have adopted in the last six months. You'll likely discover "shadow AI" that's accessing company data without proper security controls. Once you have the list, require IT approval for any AI tool that connects to company email, files, code repositories, or customer data. This simple inventory exercise costs nothing but can prevent the kind of data breaches that force 40% of small businesses to close permanently.
Worth Reading
- Cloudflare's 2026 Threat Report — Detailed analysis of how AI is changing both attacks and defense, including how Cloudflare's own AI found a critical vulnerability in their code.
- Trend Micro's State of AI Security — The report documenting the 363 AI vulnerabilities and why AI agent security needs immediate attention.
- Palo Alto Unit 42 Research on Prompt Injection — Technical deep-dive into the first confirmed real-world AI assistant attacks (advanced readers only).
- Help Net Security on Agentic Browser Vulnerabilities — Explains how the "PleaseFix" vulnerability family works and what makes AI agents different from traditional software.