
Tuesday, February 24, 2026
Claude under siege: 16M fraudulent API calls exposed
The big story: Anthropic is sounding the alarm on coordinated distillation attacks from Chinese AI labs that allegedly made 16 million fraudulent API calls to steal Claude's capabilities (yikes). Meanwhile, OpenAI is playing offense with multiyear deals with consulting giants to mainstream enterprise AI adoption, and their GPT-5.3-Codex just crushed a wild 25-hour coding sprint, generating 30k lines of code for a design tool without stopping. Oh, and Anthropic's Claude Code Security is now catching vulnerabilities before hackers even find them. So here's the question: if AI can steal AI, who's actually safe?

Top Stories
CNBC
OpenAI is partnering with major consulting firms to accelerate enterprise deployment of its Frontier AI platform, recognizing that scaling AI adoption requires the deep implementation expertise and existing customer relationships that consulting giants possess.
AlphaSignal
Anthropic alleges that Chinese AI labs conducted massive, coordinated attacks to extract Claude's capabilities through API distillation, undermining export controls and creating national security risks by proliferating unprotected AI systems.
OpenAI
GPT-5.3-Codex completed a 25-hour autonomous design tool project, demonstrating that AI agents can now reliably handle long-horizon software tasks through structured workflows, persistent project memory, and continuous verification, marking a transition from tool babysitting to trusted-teammate behavior.
Anthropic
Anthropic's Claude Code Security uses advanced reasoning to find complex vulnerabilities that traditional pattern-matching tools miss, addressing a critical security gap as AI-enabled attacks become more sophisticated and defender workloads overwhelm security teams.
Enjoyed this issue?
Get daily AI intel delivered to your inbox. No fluff, just the stories that matter.