#10 Summary
Humanoid Locomotion as Next Token Prediction
What.
They trained a causal transformer decoder to predict the next action (and observation) in a trajectory.
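A minimal sketch of the idea, not the paper's exact architecture: a causal decoder over an interleaved sequence of observation and action tokens, trained with a next-token regression loss. Dimensions, the loss choice, and all names here are assumptions:

```python
import torch
import torch.nn as nn

# Sketch only: a causal decoder over an interleaved sequence
# [o_1, a_1, o_2, a_2, ...], predicting the token at t+1 from tokens <= t.
class CausalLocomotionModel(nn.Module):
    def __init__(self, dim=256, n_heads=8, n_layers=4, token_dim=64):
        super().__init__()
        self.embed = nn.Linear(token_dim, dim)            # project tokens into model space
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(dim, token_dim)             # regress the next token

    def forward(self, tokens):                            # tokens: (B, T, token_dim)
        T = tokens.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(T)  # causal mask
        h = self.decoder(self.embed(tokens), mask=mask)
        return self.head(h)

model = CausalLocomotionModel()
seq = torch.randn(2, 16, 64)                              # interleaved obs/action tokens
pred = model(seq)
loss = nn.functional.mse_loss(pred[:, :-1], seq[:, 1:])   # next-token regression loss
```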
Data.
Normally, you'd need a bunch of data that shows both what the robot sees (observations) and what it does (actions).
But that's tough to get. The authors used videos - some with the actions laid out and some without. This way, the robot can learn even from videos where we don't know what the actions were supposed to be.
When actions are missing, they replace them with a [MASK] token. Very simple and straightforward.
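Here's a rough sketch of how that masking could look (all names and shapes are illustrative, not the paper's code): action slots in unlabeled clips get a learned [MASK] embedding, and those positions are dropped from the loss:

```python
import torch
import torch.nn as nn

token_dim = 64
mask_token = nn.Parameter(torch.zeros(token_dim))         # learned [MASK] embedding

def build_sequence(obs, actions=None):
    """Interleave obs (T, D) with actions (T, D), or with [MASK] if unlabeled."""
    T = obs.size(0)
    if actions is None:
        actions = mask_token.expand(T, -1)                # no labels: fill with [MASK]
        loss_mask = torch.zeros(2 * T, dtype=torch.bool)  # skip loss on action slots
        loss_mask[0::2] = True                            # still supervise observations
    else:
        loss_mask = torch.ones(2 * T, dtype=torch.bool)   # supervise everything
    seq = torch.stack([obs, actions], dim=1).reshape(2 * T, -1)  # o_1,a_1,o_2,a_2,...
    return seq, loss_mask

obs = torch.randn(8, token_dim)
seq, loss_mask = build_sequence(obs)                      # an action-free clip
```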
My thoughts
- I love how this paper makes the robot predict its next move and what it'll see next. It's like it's planning its future steps.
- For the robot to accurately predict what happens next, it needs an implicit understanding of physics and how the world works. This concept, called a 'world model,' is super intriguing.
- What's next? You could add conditioning via cross-attention and train it to follow commands, like in the VIMA paper (see the sketch below).
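A hypothetical sketch of that extension (my idea, not from the paper): each decoder block keeps causal self-attention over the obs/action tokens, then cross-attends to a command embedding. The command encoder and all names are assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical: condition the decoder on a command embedding via
# cross-attention, in the spirit of VIMA.
class ConditionedBlock(nn.Module):
    def __init__(self, dim=256, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x, command, causal_mask):
        # causal self-attention over the obs/action sequence
        h, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = self.norm1(x + h)
        # cross-attention: sequence tokens attend to the command embedding
        h, _ = self.cross_attn(x, command, command)
        return self.norm2(x + h)

block = ConditionedBlock()
x = torch.randn(2, 16, 256)                               # obs/action tokens
cmd = torch.randn(2, 4, 256)                              # e.g. encoded "walk forward"
mask = nn.Transformer.generate_square_subsequent_mask(16)
out = block(x, cmd, mask)
```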