- RAD was released on preprint repository arXiv
- RAD achieved state-of-the-art results on common benchmarks, according to the researchers
According to a report by VentureBeat, a group of University of California, Berkeley researchers has open-sourced Reinforcement Learning with Augmented Data (RAD). In an accompanying paper, the authors say the module can improve any existing reinforcement learning algorithm, and that RAD achieves better compute and data efficiency than Google AI’s PlaNet and other cutting-edge algorithms such as DeepMind’s Dreamer and SLAC, which came out of UC Berkeley and DeepMind.
The report notes that, according to the researchers, RAD achieved state-of-the-art results on common benchmarks and matched or beat every baseline in performance and data efficiency across 15 DeepMind Control environments, in part by applying data augmentations to visual observations. The coauthors of the RAD paper include Michael “Misha” Laskin, Kimin Lee, and Berkeley AI Research codirector and Covariant founder Pieter Abbeel. RAD was released on the preprint repository arXiv.
Improving data efficiency
According to the report, the paper shows for the first time that data augmentations alone can significantly improve the data efficiency and generalization of RL methods operating from pixels, without any changes to the underlying RL algorithm, on the DeepMind Control Suite and the OpenAI ProcGen benchmarks, respectively. By using multiple augmented views of the same data point as input, CNNs are forced to learn consistencies in their internal representations. This results in a visual representation that improves generalization, data efficiency, and transfer learning.
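To illustrate the idea of augmenting pixel observations, here is a minimal NumPy sketch of batched random cropping, one commonly used image augmentation: each observation in a batch is cropped at an independent random offset, so the network sees multiple shifted views of the same underlying state. The function name, shapes, and crop size below are illustrative assumptions, not taken from the paper or its released code.

```python
import numpy as np

def random_crop(obs, out_size):
    """Randomly crop a batch of pixel observations.

    obs: array of shape (batch, height, width, channels).
    out_size: side length of the square crop (must not exceed height/width).
    Each image gets its own random offset, producing a distinct
    augmented view of the same data point on every call.
    """
    n, h, w, _ = obs.shape
    tops = np.random.randint(0, h - out_size + 1, size=n)
    lefts = np.random.randint(0, w - out_size + 1, size=n)
    return np.stack(
        [img[t:t + out_size, l:l + out_size]
         for img, t, l in zip(obs, tops, lefts)]
    )

# Two independently augmented views of the same observation batch,
# e.g. for a consistency-learning objective.
batch = np.random.rand(8, 100, 100, 3)   # hypothetical 100x100 RGB frames
view_a = random_crop(batch, 84)          # shape (8, 84, 84, 3)
view_b = random_crop(batch, 84)          # shape (8, 84, 84, 3)
```

Because the augmentation sits entirely in the data pipeline, it can be applied before any RL update step without modifying the underlying algorithm, which is the property the paper highlights.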