
Wednesday, November 5, 2025

Creators are finally fighting back on AI training

The copyright crackdown is heating up (yikes) with Japan's studios demanding OpenAI stop training on their work without permission, while Shopify is absolutely crushing it with AI traffic up 7x and orders surging 11x since January. Meanwhile, Alibaba's new Qwen3-Max-Thinking is already hitting perfect reasoning scores mid-training, and Gemini's retention numbers are looking genuinely competitive against ChatGPT. On the technical side, folks are moving beyond standard LLMs into attention hybrids and recursive transformers, though vector search is apparently having some rough production moments. If your startup relies on scraped training data, are you worried?

Top Stories

1
Creators Are Done Letting AI Borrow Their Style

Japanese content creators and their representatives are demanding OpenAI stop using their copyrighted works to train AI models, reflecting growing global friction over how generative AI companies obtain training data and the ease with which users can now recreate copyrighted styles and characters.

copyright · openai · generative-ai · intellectual-property
2
Shopify Says AI Traffic Is Up 7x Since January, AI-Driven Orders Are Up 11x

TechCrunch

Shopify is experiencing explosive growth in AI-driven shopping, with AI traffic up 7x and orders up 11x, signaling a major shift toward agentic commerce as the company positions AI as central to its platform strategy.

ai-agents · e-commerce · shopify · openai
3
Gemini's Retention Data Is Out, and It Looks Better Than Even ChatGPT's!

Thread Reader

Despite widespread adoption, vector search shows fundamental flaws in production retrieval compared with classical BM25 (see the sketch below), while Kimi K2's transparent training report offers rare insight into frontier-model training methodology and scaling approaches.

vector-search · llm · open-source · benchmarks
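The BM25-versus-vectors critique is easier to see with a concrete scorer. Here's a minimal, self-contained BM25 sketch (the toy corpus and query are made up for illustration, not taken from the thread): BM25 rewards exact matches on rare terms, exactly the signal a dense embedding can smear away.

```python
import math
from collections import Counter

# Toy corpus: note only doc 0 contains the exact rare term "bm25".
docs = [
    "bm25 ranks documents by exact term overlap weighted by rarity",
    "dense embeddings map text to vectors and rank by cosine similarity",
    "classical retrieval pipelines often combine lexical and vector scores",
]
tokenized = [d.split() for d in docs]

def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
    """Okapi BM25 with a Lucene-style non-negative idf."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc)
    score = 0.0
    for term in query:
        df = sum(term in d for d in corpus)  # document frequency
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

query = "bm25 exact term".split()
for i, doc in enumerate(tokenized):
    print(i, round(bm25_score(query, doc, tokenized), 3))
# A rare identifier like "bm25" dominates the lexical score; an embedding
# can pull in nearby-but-wrong neighbors instead, which is the production
# failure mode the thread attributes to pure vector search.
```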
4
Beyond Standard LLMs

Sebastian Raschka's Magazine

Beyond standard LLMs, emerging architectures such as linear attention hybrids, text diffusion models, code world models, and small recursive transformers each make distinct trade-offs, exchanging efficiency for capability or specialization for generality, while traditional transformers remain the proven default for most current applications (see the attention sketch below).

llm · transformer-architecture · attention-mechanisms · linear-attention
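To make the linear-attention trade-off concrete, here is a hedged NumPy sketch (the shapes and the elu-based feature map are illustrative assumptions, not Raschka's code): softmax attention materializes an n×n score matrix, while kernelized linear attention drops the softmax so the matmuls can be reassociated into a constant-size state.

```python
import numpy as np

n, d = 6, 4  # sequence length, head dimension
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

# Softmax attention: builds an n x n weight matrix, O(n^2) time and memory.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out_softmax = weights @ V

# Linear attention: replace softmax with a positive feature map phi so the
# product can be regrouped as Q (K^T V), a d x d state independent of n.
def phi(x):
    return np.where(x > 0, x + 1, np.exp(x))  # elu(x) + 1, a common choice

Qp, Kp = phi(Q), phi(K)
kv = Kp.T @ V                      # d x d summary of keys and values
z = Kp.sum(axis=0)                 # normalizer term
out_linear = (Qp @ kv) / (Qp @ z)[:, None]

# The outputs differ: linear attention trades exact softmax weighting for
# constant-size state, which is why hybrids interleave both layer types.
print(out_softmax.shape, out_linear.shape)
```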
5
Alibaba Launches Qwen3-Max-Thinking Early, a Mid-Training Model Already Achieving Perfect Reasoning Benchmark Results

Alibaba

Alibaba's Qwen3-Max-Thinking early preview achieves perfect scores on elite reasoning benchmarks while still mid-training, signaling major progress in the Chinese LLM race and in test-time compute scaling (sketched below).

llm · alibaba · reasoning · benchmarks
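The announcement doesn't spell out how Qwen3-Max-Thinking spends test-time compute, but one common form is self-consistency: sample several reasoning chains and majority-vote the final answers. A minimal sketch, with a hypothetical `sample_answer` stub standing in for one sampled chain from the model:

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Hypothetical stand-in for one sampled reasoning chain."""
    # Pretend each individual sample is right ~70% of the time.
    return "42" if random.random() < 0.7 else random.choice(["41", "43"])

def self_consistency(question: str, k: int = 16) -> str:
    """Sample k chains and majority-vote their final answers."""
    votes = Counter(sample_answer(question) for _ in range(k))
    return votes.most_common(1)[0][0]

random.seed(7)
print(self_consistency("What is 6 * 7?"))  # aggregate answer: "42"
# More samples (more test-time compute) push the vote toward the mode,
# which is why accuracy can keep climbing without retraining the model.
```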


Enjoyed this issue?

Get daily AI intel delivered to your inbox. No fluff, just the stories that matter.