Author of "How Can We Get Enough Data to Train a Robot GPT?" in It Can Think Substack
How this journalist typically writes
Chris Paxton as author
Training a robot foundation model on a corpus comparable to language-model scale (roughly 2 trillion tokens) would require 70,000+ robot-years of data; combining scaled-up robot fleets, simulation, and human video could make that feasible, but only with substantial investment.
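The arithmetic behind that claim can be sanity-checked with a quick back-of-envelope calculation. The sketch below is illustrative only: the tokenization rate (1 token per second of robot experience), the fleet size, and the duty cycle are assumptions supplied here, not figures from the article, though with these values the continuous-operation requirement lands on the same order as the 70,000+ robot-years cited above.

```python
# Back-of-envelope estimate of robot-years needed to match an
# LLM-scale training corpus. All parameters below are illustrative
# assumptions, not figures from the article.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

def robot_years_needed(target_tokens: float,
                       tokens_per_second: float) -> float:
    """Years of continuous robot operation to collect target_tokens,
    assuming experience is tokenized at tokens_per_second."""
    return target_tokens / (tokens_per_second * SECONDS_PER_YEAR)

def wall_clock_years(robot_years: float,
                     fleet_size: int,
                     duty_cycle: float) -> float:
    """Calendar years for a fleet to collect the data, where
    duty_cycle is the fraction of each day the robots actually run."""
    return robot_years / (fleet_size * duty_cycle)

if __name__ == "__main__":
    TARGET_TOKENS = 2e12   # LLM-scale corpus size from the article
    TOKENS_PER_SEC = 1.0   # assumed tokenization rate of robot experience

    ry = robot_years_needed(TARGET_TOKENS, TOKENS_PER_SEC)
    print(f"Robot-years of experience needed: {ry:,.0f}")  # ~63,000 here

    # A hypothetical 100,000-robot fleet running half of every day:
    wc = wall_clock_years(ry, fleet_size=100_000, duty_cycle=0.5)
    print(f"Calendar years for that fleet: {wc:.1f}")      # ~1.3 years
```

The fleet calculation makes the core trade-off concrete: the total robot-years required are fixed by the token budget, so only fleet size and duty cycle determine how many calendar years the collection takes.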