
Argues that open models will never catch up with closed ones but believes they will be the engine for the next ten years of AI research.
How media typically covers Nathan Lambert
Based on 8 scored articles
Nathan Lambert as author
Claude Opus 4.6 and GPT-5.3-Codex represent a shift toward practical model assessment beyond benchmarks, with Opus maintaining an edge in usability for coding tasks while Codex 5.3 shows marked improvement and a narrowing competitive gap.
“Author of "Opus 4.6, Codex 5.3, and the Post-Benchmark Era" in Interconnects”
Using multiple specialized AI models for different tasks—GPT 5.2 for research, Claude 4.5 Opus for coding and feedback, and Gemini 3 Pro for general knowledge—is the optimal strategy for maximizing AI utility in 2026.
“Author of "Use Multiple Models" in Interconnects”
Chinese open models, led by Qwen, achieved dominant adoption across the global AI ecosystem in 2025-2026, while Western models like Llama remain popular but stagnant, though OpenAI's GPT-OSS shows early promise for restoring Western competitiveness.
“Author of "8 Plots That Explain the State of Open Models" in Interconnects”
Current language model training methods, particularly RLHF and preference-based optimization, structurally inhibit models' ability to generate high-quality sustained prose, despite superhuman capabilities in other domains.
“Author of "Why AI Writing Is Still So Mid" in Interconnects”
Kimi K2 Thinking from Moonshot AI represents the closest open-source models have come to closed-frontier performance, with Chinese labs releasing models significantly faster than Western competitors while gaining ground on key benchmarks.
“Author of "5 Thoughts on Kimi K2 Thinking" in Interconnects”
Coding is the last tractable general domain where frontier AI models show consistent meaningful improvement, positioning code agents as the epicenter of progress toward general AI agents.
“Author of "Coding as the Epicenter of AI Progress and the Path to General Agents" in Interconnects”
Chinese open model builders including DeepSeek, Qwen, and others are releasing frontier-quality open models at high cadence, with DeepSeek's V3 and R1 representing the biggest AI stories of 2025 through permissive licensing and transparent reasoning chains.
“Co-author of the article ranking Chinese open model builders”
Directly quoted in these articles
The AI safety debate has shifted from adversarial 'doomers vs accelerationists' framing to unified coordination on technical solutions, driven by constraints from geopolitical competition, economic dependencies, and existential stakes.
“Quoted describing the evolution of the AI safety debate from 'doomers vs accelerationists' framing to unified coordination.”
Referenced in coverage
Open models will never catch up with closed frontier systems in raw capability, but this is the wrong framing—open models serve as the engine for exploratory AI research that companies cannot nurture.
“Argues that open models will never catch up with closed ones but believes they will be the engine for the next ten years of AI research.”