Our brains are outsourcing memory to GPS


Tuesday, December 2, 2025


We're looking at some wild shifts in how AI actually works: OpenAI is scaling code review to catch production bugs with precision, Claude is gaining serious semantic understanding of Excel (goodbye, spreadsheet tedium?), and a16z is sounding alarms on infrastructure gaps as we race toward gigawatt-scale data centers. Oh, and it turns out GPS is making our brains lazy (yikes), while researchers are arguing the whole train-test split paradigm needs rethinking for LLMs. Here's the real question: if Claude can replace Excel and AI can review code better than humans, what skills are actually safe to keep learning right now?

Top Stories

1
Weaker Spatial Memory and GPS Navigation

Nature

Heavy GPS reliance weakens spatial memory and cognitive mapping over time: people shift toward rote stimulus-response navigation and encode fewer environmental landmarks, and the deficits carry over even to GPS-free navigation.

neuroscience · cognitive-science · navigation · memory
2
A Practical Approach to Verifying Code at Scale

OpenAI

OpenAI deployed an AI code reviewer that prioritizes signal quality over recall, using repo-wide context and execution access to catch critical bugs in both human and AI-generated code while maintaining developer trust and adoption at scale.

openai · code-generation · ai-safety · alignment
3
Claude Code Over Excel

Thread Reader

AI coding agents like Claude Code are gaining semantic understanding of spreadsheets, letting them analyze structured data through reasoning rather than naive low-level cell manipulation — with tools like LlamaSheets and OpenAI's function-calling fine-tuning pushing in the same direction.

coding-agents · llm · openai · structured-outputs
4
The End of the Train-Test Split

Personal Blog

Traditional machine learning practices fail for LLM-based classification because complex policies require continuous expert involvement and prompt refinement rather than large training datasets and blind testing. Success demands restructuring how policy and engineering teams collaborate to align models with evolving, ambiguous business rules.

llm · machine-learning · data-quality · content-moderation
5
a16z's Gigawatt-scale Data Center Timeline

Thread Reader

a16z sounds the alarm on infrastructure gaps as the industry races toward gigawatt-scale data centers, arguing that the friction developers face building with LLMs — and the gaps in generative AI infrastructure underneath them — represent a significant market opportunity for better tooling and frameworks.

llm · ai-infrastructure · generative-ai · a16z

Keep Reading

Industry Voices

Enjoyed this issue?

Get daily AI intel delivered to your inbox. No fluff, just the stories that matter.