A Walk in the Park: Learning to Walk in 20 Minutes With RL

Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments that do not require domain knowledge. Unfortunately, due to sample inefficiency, deep RL applications have primarily focused on simulated environments. In this work, we demonstrate that the recent advancements in machine learning algorithms and libraries combined with a carefully tuned robot controller lead to learning quadruped locomotion in only 20 minutes in the real world. We evaluate our approach on several indoor and outdoor terrains which are known to be challenging for classical model-based controllers. We observe the robot to be able to learn walking gait consistently on all of these terrains. Finally, we evaluate our design decisions in a simulated environment.

This is a cool paper where they taught a robotic dog to walk with only 20 minutes of real-world reinforcement learning. I’m impressed with how fast it learned, but I don’t really understand why their technique worked so well.

A key thing they did compared to prior work:

we use JAX, a modern machine learning framework that performs just-in-time compilation to optimize execution significantly

Beyond that, they seem to have focused on faster training: starting from existing architectures and tweaking them to strike a balance between smooth movements and training speed, especially through regularization. The details are in section V of the paper, and understanding what they tweaked requires reading the papers for the architectures they built on (which I didn’t do).
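To make the JAX point concrete, here is a minimal sketch of what just-in-time compilation looks like in practice. The network shape and names here are made up for illustration, not the paper’s actual policy: `jax.jit` traces the function once, compiles it with XLA, and every later call runs the fused, optimized kernel instead of interpreted Python.

```python
import jax
import jax.numpy as jnp

# Hypothetical tiny policy network, just to show the jit mechanism.
# jax.jit compiles this on the first call for each input shape;
# subsequent calls skip Python overhead entirely.
@jax.jit
def policy_step(params, obs):
    h = jnp.tanh(obs @ params["w1"] + params["b1"])
    return jnp.tanh(h @ params["w2"] + params["b2"])

params = {
    "w1": jnp.zeros((3, 8)), "b1": jnp.zeros(8),
    "w2": jnp.zeros((8, 2)), "b2": jnp.zeros(2),
}
action = policy_step(params, jnp.ones(3))  # first call triggers compilation
```

In a real-time control loop like theirs, shaving the per-step policy latency this way matters, since the controller has to keep up with the robot’s hardware.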


I looked into actor-critic methods, but they don’t seem like that big a deal compared with vanilla policy-gradient RL.
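For what it’s worth, the actor-critic idea is small but not nothing: instead of scaling the policy update by raw returns, a learned critic supplies a lower-variance advantage estimate (the TD error). A toy sketch on a made-up 5-state chain MDP (everything here is hypothetical, not from the paper):

```python
import numpy as np

# Toy chain MDP: 5 states, reward 1.0 for reaching the rightmost state.
# Actions: 0 = move left, 1 = move right.
N_STATES, N_ACTIONS = 5, 2
rng = np.random.default_rng(0)

V = np.zeros(N_STATES)                    # critic: state-value estimates
logits = np.zeros((N_STATES, N_ACTIONS))  # actor: policy parameters
alpha_v, alpha_pi, gamma = 0.1, 0.1, 0.99

def env_step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

for episode in range(500):
    s = 0
    for _ in range(50):
        p = np.exp(logits[s]); p /= p.sum()       # softmax policy
        a = rng.choice(N_ACTIONS, p=p)
        s2, r, done = env_step(s, a)
        # TD error doubles as the advantage estimate.
        td_error = r + (0.0 if done else gamma * V[s2]) - V[s]
        V[s] += alpha_v * td_error                # critic update
        grad = -p; grad[a] += 1.0                 # grad of log pi(a|s) wrt logits
        logits[s] += alpha_pi * td_error * grad   # actor update
        s = s2
        if done:
            break

# After training, the policy prefers moving right in every non-terminal state.
```

The variance reduction from the critic is one of the standard ingredients for sample efficiency, which presumably matters a lot when the training budget is 20 minutes of real-world data.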