The lawsuit that changes everything: NYT vs. Perplexity

Image via The Neuron

Monday, December 8, 2025


The New York Times is going to war with Perplexity over unlicensed content scraping (yikes), signaling that the era of publishers playing nice with AI companies is officially over. Meanwhile, Google is dropping Gemini 3 Pro with wild visual reasoning breakthroughs, and OpenAI is apparently in "Code Red" mode, rushing GPT-5.2 to the finish line to keep pace. Oh, and Meta just scooped up Limitless for its wearables ambitions, because apparently everyone's racing to own the next computing platform. There's also fascinating research suggesting reinforcement learning might not be the most efficient path to LLM reasoning after all. So here's the real question: when publishers can sue and investors keep piling in, are we actually solving AI, or just funding an arms race?

Top Stories

1
Publishers Are Officially Done Playing Nice With AI

Publishers are escalating legal battles against AI companies like Perplexity over unauthorized content use, strategically combining litigation with licensing negotiations to secure compensation for their work and protect journalism's economic viability.

copyright · ai-regulation · perplexity · publishers
2
Gemini 3 Pro Advances Visual Reasoning

Google Blog

Gemini 3 Pro advances multimodal AI with breakthrough capabilities in document understanding, spatial reasoning, and video analysis, enabling practical applications across education, healthcare, law, and robotics. This represents Google's next competitive move in the foundation model race with substantially improved visual reasoning over previous generations.

gemini · google · multimodal · vision-ai
3
Next ChatGPT Upgrade Imminent Following 'Code Red' Declaration

9to5Mac

OpenAI is reportedly rushing GPT-5.2 to market within weeks to counter Google's Gemini 3 launch, a sign of accelerating competition and shrinking product development cycles in the AI arms race.

openai · chatgpt · llm · google
4
There's Got to be a Better Way!

Argmin

Reinforcement learning works for LLM reasoning tasks but remains fundamentally sample-inefficient; the author argues that certainty-equivalence approaches could be a superior alternative, potentially speeding up model training dramatically.

reinforcement-learning · llm · reasoning-models · optimization


Enjoyed this issue?

Get daily AI intel delivered to your inbox. No fluff, just the stories that matter.