Agents are moving faster than regulators can handle

Image via Zenity Labs

Thursday, February 19, 2026


Google just dropped Lyria 3 in Gemini and made music generation as frictionless as asking for toast (wild), while Anthropic is sounding the alarm that real-world AI agents are gaining autonomy far faster than regulation can keep pace (yikes). Meanwhile, OpenAI is benchmarking AI agents on blockchain bug exploitation, and Zenity Labs released a maliciousness classifier that detects attacks by looking at the LLM's internals instead of just watching what comes out. Here's the question: if AI agents can now make music, find bugs, and operate autonomously faster than we can regulate them, are we prepared for what's next?

Top Stories

1. Measuring AI Agent Autonomy in Practice

Anthropic

Anthropic analyzed millions of real-world agent interactions and found AI systems operating with increasing autonomy while experienced users develop trust-based oversight strategies. Most deployments remain low-risk, but emerging high-stakes applications in sensitive domains call for new post-deployment monitoring approaches.

agents · autonomy · oversight · safety
2. Making a Song Is Now Easier Than Making Toast

Google democratizes music creation through Gemini's Lyria 3 model, enabling anyone to generate original songs via text or image prompts while navigating copyright tensions through watermarking and style-based safeguards rather than artist mimicry.

generative-ai · music-generation · google · deepmind
3. Gemini's Lyria 3 for Music Generation

Google Blog

Google launches Lyria 3 for consumer music generation in Gemini, emphasizing responsible AI development with watermarking and copyright safeguards while expanding creative tools beyond images and video.

generative-ai · google · music · lyria
4. OpenAI Releases an Open Benchmark Testing Detection, Patching, and Exploitation of Audited Blockchain Bugs

AlphaSignal

OpenAI released EVMbench to measure how well AI agents can find and fix smart contract vulnerabilities, showing significant capability gains but highlighting the importance of using AI defensively as blockchain security risks emerge.

llm · agents · blockchain · cybersecurity
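To make the benchmark idea concrete: EVMbench's actual task format, harness, and scoring are not described in this summary, but a benchmark of this shape boils down to scoring an agent against audited ground-truth bugs per contract. Everything below (the `Task`/`Result` types, the toy agent, the scoring rule) is an illustrative stand-in, not OpenAI's implementation.

```python
# Illustrative sketch only: EVMbench's real task format and scoring are
# not public in this summary; this is a generic "score an agent on
# detection tasks against audited ground truth" harness.
from dataclasses import dataclass, field

@dataclass
class Task:
    contract: str            # Solidity source (stand-in string)
    known_bugs: set[str]     # audited vulnerability IDs, e.g. {"reentrancy"}

@dataclass
class Result:
    detected: set[str] = field(default_factory=set)

def score_detection(task: Task, result: Result) -> float:
    """Fraction of the audited bugs the agent found (recall)."""
    if not task.known_bugs:
        return 1.0  # nothing to find; trivially perfect
    return len(result.detected & task.known_bugs) / len(task.known_bugs)

# Toy usage: a fake "agent" that flags any contract using a low-level call.
tasks = [
    Task('function withdraw() { msg.sender.call{value: bal}(""); }',
         {"reentrancy"}),
    Task("function ping() public pure returns (uint) { return 1; }",
         set()),
]

def toy_agent(contract: str) -> Result:
    found = {"reentrancy"} if "call{" in contract else set()
    return Result(detected=found)

scores = [score_detection(t, toy_agent(t.contract)) for t in tasks]
print(f"mean detection score: {sum(scores) / len(scores):.2f}")
```

A real harness would also penalize false positives and add separate tracks for patching (does the fix compile and close the bug?) and exploitation (does the agent's transaction actually drain the vulnerable contract?); the recall-only rule here is the simplest piece of that picture.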
5. Looking Inside: A Maliciousness Classifier Based on the LLM's Internals

Zenity Labs

Zenity Labs introduces an activation-based maliciousness classifier for AI agents with rigorous out-of-distribution testing and open-source interpretability tools, demonstrating that monitoring LLM internals outperforms traditional input/output filtering approaches.

security · llm · agents · interpretability
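The core idea behind an activation-based classifier is a "probe": a small model trained on the LLM's hidden-state vectors rather than its text output. Zenity's actual architecture and training data are not described in this summary, so the sketch below uses synthetic activation vectors and a plain logistic-regression probe purely to illustrate the technique.

```python
# Hedged sketch of an activation-based maliciousness probe.
# Assumption: malicious and benign prompts separate along some direction
# in the model's activation space; we fake that with synthetic data.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # stand-in for a model's hidden-state width

# Synthetic "activations": two clusters offset along a random direction.
direction = rng.normal(size=DIM)
direction /= np.linalg.norm(direction)
benign = rng.normal(size=(200, DIM)) - 1.5 * direction
malicious = rng.normal(size=(200, DIM)) + 1.5 * direction

X = np.vstack([benign, malicious])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Train a linear probe (logistic regression via batch gradient descent).
w = np.zeros(DIM)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(malicious)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

preds = (X @ w + b) > 0
accuracy = float(np.mean(preds == y))
print(f"probe accuracy: {accuracy:.2f}")
```

The appeal over input/output filtering is that the probe sees the model's internal representation of a request, which an attacker cannot rewrite the way they can rewrite a prompt; the hard part in practice, as the Zenity work stresses, is making such probes hold up out of distribution.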


Enjoyed this issue?

Get daily AI intel delivered to your inbox. No fluff, just the stories that matter.