Author of "Meta's Agent Learning" on arXiv
Kai Zhang as author
Cross-domain generalization for RL-trained LLM agents is driven primarily by state-information richness and planning complexity rather than domain realism, and it can be improved through strategic state randomization and step-by-step reasoning during training.
Author of "Paying Less Generalization Tax: A Cross-Domain Generalization Study of RL Traini" on arXiv
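The state-randomization idea above can be illustrated with a minimal sketch. This is not the paper's actual procedure; the environment schema, field names, and candidate values below are all hypothetical, chosen only to show how surface details of a training state could be varied while the task-relevant structure is held fixed.

```python
import random

def randomize_state(state, noise_fields, seed=None):
    """Perturb selected fields of an environment state.

    Illustrative sketch only: `noise_fields` maps a field name to a list
    of candidate replacement values; task-relevant fields are untouched.
    """
    rng = random.Random(seed)
    new_state = dict(state)
    for field, candidates in noise_fields.items():
        new_state[field] = rng.choice(candidates)
    return new_state

# Example: vary an incidental detail (the room) across training copies
# while keeping the goal, so the agent cannot latch onto surface cues.
base = {"inventory": ["key"], "room": "kitchen", "goal": "open door"}
variants = [
    randomize_state(base, {"room": ["kitchen", "lab", "garage"]}, seed=i)
    for i in range(3)
]
```

A real pipeline would apply such randomization when sampling training episodes, so the policy sees many surface realizations of the same underlying planning problem.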
Meta proposes an "early experience" learning paradigm that combines implicit world modeling and self-reflection, enabling language agents to improve from their own interaction data without requiring reward signals or expert demonstrations.
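One way to picture the implicit world modeling component is as turning reward-free rollouts into supervised prediction targets: given a state and the agent's own action, predict the next state. The sketch below is an assumption about how such data might be constructed, not the paper's actual schema; the field names and text format are illustrative.

```python
def build_world_model_examples(rollout):
    """Convert a reward-free rollout into (state, action) -> next-state
    prediction examples for supervised training.

    Illustrative sketch: `rollout` is a list of steps, each a dict with
    hypothetical "state" and "action" fields; no reward is needed.
    """
    examples = []
    for i in range(len(rollout) - 1):
        step, nxt = rollout[i], rollout[i + 1]
        examples.append({
            "input": f"State: {step['state']}\nAction: {step['action']}",
            "target": f"Next state: {nxt['state']}",
        })
    return examples

# A toy three-step rollout; the final step has no action to predict from.
rollout = [
    {"state": "door locked", "action": "pick up key"},
    {"state": "holding key", "action": "unlock door"},
    {"state": "door open", "action": None},
]
examples = build_world_model_examples(rollout)
```

The self-reflection half of the paradigm would add a second kind of target (the agent explaining or critiquing its own action choices), but the key point shown here is that both signals come from the agent's own interactions rather than from rewards or expert demonstrations.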