
Leading AI podcaster who has interviewed Mark Zuckerberg and Tyler Cowen about AI's future.
How this journalist typically writes
Based on 11 scored articles
Dwarkesh Patel as author
Reinforcement learning is significantly more information-inefficient than supervised learning: beyond the higher FLOP requirements per sample, the bits-per-sample information density is much lower during most RL training.
“Author of "RL is Even More Information Inefficient Than You Thought"”
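The bits-per-sample gap can be made concrete with a back-of-the-envelope comparison. This sketch is illustrative only, not taken from the essay; the vocabulary size and sequence length are assumed values, and it compares upper bounds on the information one training sample can convey.

```python
import math

# Assumed illustrative values (not from the essay):
vocab_size = 50_000        # vocabulary of a hypothetical language model
tokens_per_sample = 1_000  # tokens in one training sequence

# Supervised next-token prediction: every token position is supervised,
# and each label can carry up to log2(vocab_size) bits.
supervised_bits = tokens_per_sample * math.log2(vocab_size)

# RL with a binary episode reward: the entire trajectory resolves to a
# single success/failure signal, i.e. at most log2(2) = 1 bit.
rl_bits = math.log2(2)

print(f"supervised: ~{supervised_bits:,.0f} bits/sample")
print(f"RL reward:  ~{rl_bits:.0f} bit/sample")
```

Under these assumptions the supervised sample delivers on the order of 15,000 bits while the RL episode delivers one, which is the rough sense in which RL's information density per sample is far lower.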
Referenced in coverage
AI-designed drugs alone cannot significantly accelerate clinical trials, because the primary bottlenecks are regulatory, operational, and biological constraints unrelated to drug-candidate quality, not a lack of intelligence in drug design.
“Interviewed alongside Anthropic's CEO to discuss whether clinical trials will remain a bottleneck in the age of AI, stating that most clinical trials fail because drugs don't work.”
Elon Musk's views on AI alignment are confused, xAI's safety situation is deteriorating with the departure of its safety team, and Musk dismisses safety concerns as performative theater.
“Conducted a 2026 podcast interview with Elon Musk discussing AI alignment, data centers, and safety concerns at xAI.”
Anthropic CEO Dario Amodei reaffirms predictions of extremely rapid AI capability advances, including potential 'geniuses in a data center' within years, driven by seven key scaling factors, though the company's strategy remains conservative relative to his stated optimism.
“Hosted a podcast interview with Dario Amodei discussing AI progress and capabilities.”
SpaceX plans to launch a million satellites as orbital data centers for AI, with Elon Musk claiming space will be the most economically compelling location for AI compute within 30-36 months due to energy availability constraints on Earth.
“Interviewed Elon Musk for nearly three hours and challenged him on his plan to build data centers in space.”
The AI revolution has shifted from reinforcement-learning agents to large-scale pre-trained Transformer models governed by scaling laws; agents are now re-emerging, enhanced by pre-training insights, but fundamental questions remain about whether the massive capital investment is justified.
“Leading AI podcaster who has interviewed Mark Zuckerberg and Tyler Cowen about AI's future.”
The AI industry is transitioning from an era focused on scaling model parameters to an era emphasizing research-driven improvements in generalization and safety to ensure AGI development benefits humanity.
“Interviewed Ilya Sutskever about AI topics including SSI's strategy and AGI development.”
Andrej Karpathy argues that the 'decade of agents' is a more accurate framing than calling 2025 the 'year of agents,' citing insufficient groundwork in intelligence and context handling, with AI agent adoption likely reaching peak impact by 2027-2028.
“Podcast host who interviewed Andrej Karpathy about AI agents and AGI timelines.”
Taking The Bitter Lesson seriously means enabling AI to accelerate compute and energy technologies through autonomous science and RL-guided experimentation, rather than pursuing recursive self-improvement through algorithms alone.
“Acknowledged for reviewing drafts of the essay”