Wrote about how AIs are more than just next-token predictors, discussing fine-tuning and RLHF.
How this journalist typically writes
Kelsey Piper as author
Large language models show remarkably consistent liberal values across different languages despite being trained predominantly on English text, suggesting the Sapir-Whorf hypothesis does not apply to AI systems.
“Author of ‘Do AIs Think Differently in Different Languages?’”
Referenced in coverage
The characterization of AI as a “next-token predictor” is a confusion of levels: humans also perform next-token (sense-datum) prediction at the learning level, suggesting this is a job description rather than a fundamental species distinction.
“Wrote about how AIs are more than just next-token predictors, discussing fine-tuning and RLHF.”