How media typically covers Scott Alexander
Based on 5 scored articles
Scott Alexander as author
OpenAI's contract with the Department of War to provide AI models for 'all lawful use' lacks meaningful safeguards against mass surveillance and autonomous weapons because existing laws have wide loopholes and can be changed unilaterally by the government.
“Author of ""All Lawful Use": Much More Than You Wanted To Know" in Astral Codex Ten”
The characterization of AI as a 'next-token predictor' confuses levels of description: humans also perform next-token (sense-datum) prediction at the learning level, so 'next-token predictor' is a job description rather than a fundamental species distinction.
“Author of "Next-Token Predictor Is An AI's Job, Not Its Species" in Astral Codex Ten”
AI safety concerns and regulation are unlikely to cause America to lose the technological race with China because America's 10x compute advantage and superior chip production capacity far outweigh any efficiency losses from safety-focused development.
“Author of "Why AI Safety Won't Make America Lose The Race With China" in Astral Codex Ten”
“Author of "In Search Of AI Psychosis" in Astral Codex Ten”
Referenced in coverage
A social network designed for AI agents to interact with each other has spawned emergent behaviors like bots creating religions, raising questions about bot autonomy versus human direction.
“US blogger who got his bot to participate on Moltbook and noted that, ultimately, humans can ask bots to post for them.”
Language models function as 'free energy for text,' letting users expand questions into answers and democratizing creative text generation in ways that fundamentally change human creativity.
“Referenced in a simile created by Claude about hypothetical essay topics”
AI may solve platform degradation and competitive races-to-the-bottom by raising average user intelligence, enabling people to see through manipulation and demand genuine value.
“Proposed that only a benevolent artificial superintelligence could coordinate humanity out of competitive coordination problems, and later co-authored AI 2027.”