Tensor Based Knowledge Transfer Across Skill Categories for Robot Control

Chenyang Zhao, Timothy M. Hospedales, Freek Stulp, Olivier Sigaud

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 3462-3468. https://doi.org/10.24963/ijcai.2017/484

Advances in hardware and learning for control are enabling robots to perform increasingly dextrous and dynamic control tasks. These skills typically require a prohibitive amount of exploration for reinforcement learning, and so are commonly achieved by imitation learning from manual demonstration. The costly, non-scalable nature of manual demonstration has motivated work on skill generalisation, e.g., through contextual policies and options. Despite good results, existing work along these lines is limited to generalising across variants of one skill, such as throwing an object to different locations. In this paper we go significantly further and investigate generalisation across qualitatively different classes of control skills. In particular, we introduce a class of neural network controllers that can realise four distinct skill classes: reaching, object throwing, casting, and ball-in-cup. By factorising the weights of the neural network, we are able to extract transferable latent skills that enable dramatic acceleration of learning in cross-task transfer. With a suitable curriculum, this allows us to learn challenging dextrous control tasks like ball-in-cup from scratch with pure reinforcement learning.
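As a rough illustration of the weight-factorisation idea in the abstract, the sketch below stacks per-task controller weight matrices into a tensor and extracts a low-rank shared basis plus task-specific codes. The array shapes, variable names, and the use of a plain SVD on the task-mode unfolding (rather than the paper's specific tensor factorisation) are all assumptions made for illustration only.

```python
import numpy as np

# Hypothetical setup: one weight matrix per skill (reaching, throwing,
# casting, ball-in-cup), all of the same shape, learned independently.
rng = np.random.default_rng(0)
n_tasks, n_in, n_out = 4, 10, 6
task_weights = rng.standard_normal((n_tasks, n_in, n_out))

# Stack the per-task weights into a 3-way tensor and unfold it along the
# task mode, so each row is one task's flattened weight matrix.
unfolded = task_weights.reshape(n_tasks, n_in * n_out)

# Low-rank factorisation (plain SVD as a stand-in for the paper's tensor
# factorisation): shared latent components and per-task combination codes.
rank = 2
U, sigma, Vt = np.linalg.svd(unfolded, full_matrices=False)
task_codes = U[:, :rank] * sigma[:rank]   # task-specific combination weights
latent_skills = Vt[:rank]                 # shared latent skill basis

# A new skill's controller weights can then be searched for in this
# low-dimensional latent space: only `rank` coefficients to explore
# (e.g., by reinforcement learning) instead of n_in * n_out raw weights.
new_code = rng.standard_normal(rank)
new_weights = (new_code @ latent_skills).reshape(n_in, n_out)
print(new_weights.shape)  # (10, 6)
```

The point of the sketch is only the structure of the transfer: previously learned skills define a shared low-rank basis, and learning a new skill reduces to searching over a small set of coefficients in that basis.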
Keywords:
Machine Learning: Transfer, Adaptation, Multi-task Learning
Robotics and Vision: Developmental Robotics