Claude under siege: 16M fraudulent API calls exposed

Image via AlphaSignal

Tuesday, February 24, 2026


The big story: Anthropic is sounding the alarm on coordinated distillation attacks from Chinese AI labs that allegedly made 16 million fraudulent API calls to steal Claude's capabilities (yikes). Meanwhile, OpenAI is playing offense with multiyear deals with consulting giants to mainstream enterprise AI adoption, and their GPT-5.3-Codex just crushed a wild 25-hour coding sprint, generating 30k lines of code for a design tool without stopping. Oh, and Anthropic's Claude Code Security is now catching vulnerabilities before hackers even find them. So here's the question: if AI can steal AI, who's actually safe?

Top Stories

1
OpenAI Lands Multiyear Deals with Consulting Giants in Enterprise Push

CNBC

OpenAI partners with major consulting firms to accelerate enterprise deployment of its Frontier AI platform, recognizing that scaling AI adoption requires deep implementation expertise and existing customer relationships the consulting giants possess.

openai · enterprise-ai · agents · partnerships
2
Anthropic Claims Three AI Labs Stealing Claude's Capabilities at Scale

AlphaSignal

Anthropic revealed that Chinese AI labs conducted massive coordinated attacks to steal Claude's capabilities through API distillation, undermining export controls and creating national security risks by proliferating unprotected AI systems.

anthropic · security · china · export-controls
3
GPT-5.3-Codex Ran a 25-Hour Coding Sprint

OpenAI

GPT-5.3-Codex completed a 25-hour autonomous design tool project, demonstrating that AI agents can now reliably handle long-horizon software tasks through structured workflows, persistent project memory, and continuous verification—marking a transition from tool babysitting to trusted teammate behavior.

codex · agents · long-horizon-tasks · autonomous-coding
4
Claude Code Security Finds Flaws Before Hackers Do

Anthropic

Anthropic's Claude Code Security uses advanced reasoning to find complex vulnerabilities that traditional pattern-matching tools miss, closing a critical gap as AI-enabled attacks grow more sophisticated and defender workloads overwhelm security teams.

anthropic · ai-security · vulnerability-detection · code-analysis

Keep Reading

Industry Voices

Enjoyed this issue?

Get daily AI intel delivered to your inbox. No fluff, just the stories that matter.