Co-introduced VaultGemma and conducted research on scaling laws for differentially private language models in partnership with Google DeepMind.
How media typically covers Ryan McKenna
Ryan McKenna as author
Scaling laws for differentially private language model training accurately model compute-privacy-utility trade-offs and identify optimal training configurations for privacy-preserving LLM development.
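As a rough illustration of what such a scaling law can look like (a hypothetical Chinchilla-style form with an added privacy penalty, not the paper's published formula; every symbol below is an illustrative assumption):

    \mathcal{L}(N, D, \varepsilon) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} + \frac{C}{\varepsilon^{\gamma}}

where N is the parameter count, D the number of training tokens, ε the differential-privacy budget, E the irreducible loss, and A, B, C, α, β, γ fitted constants. Fitting a surface of this kind over (N, D, ε) is what lets one read off a loss-minimizing training configuration for a fixed compute and privacy budget, which is what identifying optimal configurations amounts to.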
“Author of ‘Tool-space interference in the MCP era: Designing for agent compatibility at scale’ on arXiv”
Referenced in coverage
Google released VaultGemma, the largest open-weight language model (1B parameters) trained from scratch with differential privacy, alongside new scaling laws quantifying compute-privacy-utility trade-offs.
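Concretely, “trained with differential privacy” typically means DP-SGD: each example's gradient is clipped to a fixed L2 norm, and Gaussian noise calibrated to that norm is added before the parameter update. A minimal PyTorch-style sketch follows (illustrative only, not VaultGemma's training code; dp_sgd_step, clip_norm, and noise_multiplier are hypothetical names and values, and the per-example loop trades speed for clarity):

    import torch

    def dp_sgd_step(model, loss_fn, batch, optimizer,
                    clip_norm=1.0, noise_multiplier=1.1):
        # Accumulator for the sum of clipped per-example gradients.
        summed = [torch.zeros_like(p) for p in model.parameters()]
        for x, y in batch:  # naive per-example loop, for clarity
            model.zero_grad()
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            loss.backward()
            # Clip this example's full gradient to L2 norm <= clip_norm.
            sq = sum((p.grad ** 2).sum() for p in model.parameters())
            scale = min(1.0, clip_norm / (sq.sqrt() + 1e-6))
            for s, p in zip(summed, model.parameters()):
                s.add_(p.grad, alpha=float(scale))
        # Add Gaussian noise scaled to the clip bound, average, and step.
        for p, s in zip(model.parameters(), summed):
            noise = torch.randn_like(s) * (noise_multiplier * clip_norm)
            p.grad = (s + noise) / len(batch)
        optimizer.step()

The (ε, δ) privacy guarantee then follows from accounting over how many such noisy steps are taken; the scaling laws above describe how that noise trades off against compute and utility.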