Reinforcement Learning with Prototypical Representations
Details
Title : Reinforcement Learning with Prototypical Representations Author(s): Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto
Remarks
Summary
What
Learning good representations needs diverse data, while collecting diverse data through exploration needs good representations, a chicken-and-egg problem. This paper tackles it by introducing a task-agnostic pre-training stage that learns a latent space, enabling better exploration on unseen downstream tasks.
Why
Current solutions are task-dependent: they rely on task-specific reward signals, so the representations they learn transfer poorly to new tasks.
How
The latent space is built from a set of learnt embeddings (prototypes), trained jointly with entropy-based exploration that uses a k-NN particle estimator as the intrinsic reward. These prototypes then serve as landmarks for unseen downstream tasks, where they enable better exploration.
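The k-NN particle estimator above can be sketched roughly as follows: the intrinsic reward for a state embedding is based on its distance to its k-th nearest neighbour among candidate embeddings, so sparsely visited regions (large k-NN distance) yield high reward. This is a minimal illustrative sketch, not the authors' implementation; the function name, the `log(1 + d)` shaping, and the choice of Euclidean distance are assumptions for clarity.

```python
import numpy as np

def knn_entropy_reward(candidates, queries, k=3):
    """Particle-based entropy proxy (illustrative sketch).

    candidates: (N, D) array of embeddings from recent experience.
    queries:    (M, D) array of embeddings to score.
    Returns an (M,) array: reward grows with the k-NN distance,
    i.e. with how far each query lies from well-visited regions.
    """
    # Pairwise Euclidean distances between each query and all candidates
    dists = np.linalg.norm(queries[:, None, :] - candidates[None, :, :], axis=-1)
    # Distance to the k-th nearest candidate for each query
    knn_dists = np.sort(dists, axis=1)[:, k - 1]
    # log(1 + d) keeps the reward non-negative and tempered (an assumed shaping)
    return np.log(1.0 + knn_dists)
```

A query embedding far from the cluster of visited states receives a larger intrinsic reward than one inside it, which is the signal that drives the exploration policy toward novel states.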