
Tuesday, December 2, 2025
Our brains are outsourcing memory to GPS
We're looking at some wild shifts in how AI actually works: OpenAI is scaling code review to catch production bugs with precision, Claude is gaining serious semantic understanding of Excel (goodbye, spreadsheet tedium?), and a16z is sounding alarms about infrastructure gaps as we race toward gigawatt-scale data centers. Oh, and it turns out GPS is making our brains lazy (yikes), while researchers are arguing that the whole train-test split paradigm needs rethinking for LLMs. Here's the real question: if Claude can replace Excel and AI can review code better than humans, which skills are actually safe to keep learning right now?

Top Stories
Nature
Heavy GPS reliance weakens spatial memory and cognitive mapping over time: people shift toward rote stimulus-response navigation and encode fewer environmental landmarks, and the deficits carry over to navigation without GPS.
OpenAI
OpenAI deployed an AI code reviewer that prioritizes signal quality over recall, using repo-wide context and execution access to catch critical bugs in both human and AI-generated code while maintaining developer trust and adoption at scale.
Thread Reader
New tools like LlamaSheets and OpenAI's function-calling fine-tuning are enabling AI coding agents to understand and analyze structured data with semantic awareness and improved reasoning, moving beyond naive low-level implementations.
Personal Blog
Traditional machine learning practices break down for LLM-based classification: complex policies demand continuous expert involvement and prompt refinement rather than large training datasets and blind testing. Success requires restructuring how policy and engineering teams collaborate to align models with evolving, ambiguous business rules.
Thread Reader
a16z identifies key 2024 opportunities in AI-native search, LLM-powered compliance, and generative AI infrastructure, emphasizing that developer friction in building with LLMs represents a significant market gap requiring better tooling and frameworks.
Keep Reading
Industry Voices
Enjoyed this issue?
Get daily AI intel delivered to your inbox. No fluff, just the stories that matter.