Learning to Walk through Imitation
Rawichote Chalodhorn, David B. Grimes, Keith Grochow, and Rajesh P. N. Rao
Abstract
Programming a humanoid robot to walk is a challenging problem in robotics. Traditional approaches rely heavily on prior knowledge of the robot's physical parameters to devise sophisticated control algorithms for generating a stable gait. In this paper, we provide, to our knowledge, the first demonstration that a humanoid robot can learn to walk directly by imitating a human gait obtained from motion capture (mocap) data. Training using human motion capture is an intuitive and flexible approach to programming a robot, but direct use of mocap data usually results in dynamically unstable motion. Furthermore, optimization using mocap data in the humanoid full-body joint space is typically intractable. We propose a new model-free approach to tractable imitation-based learning in humanoids. We represent kinematic information from human motion capture in a low-dimensional subspace and map motor commands in this low-dimensional space to sensory feedback to learn a predictive dynamic model. This model is used within an optimization framework to estimate optimal motor commands that satisfy the initial kinematic constraints as closely as possible while at the same time generating dynamically stable motion. We demonstrate the viability of our approach by providing examples of dynamically stable walking in a humanoid learned from mocap data using both a dynamic simulator and a real humanoid robot.
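As a rough illustration of two ideas sketched in the abstract, the following is a minimal, hypothetical Python sketch (not the authors' implementation): embedding full-body joint-angle poses in a low-dimensional subspace, and fitting a simple predictive model that maps low-dimensional motor commands and sensory feedback to the next sensory state. The variable names, dimensions, and the choice of PCA and ridge regression are illustrative assumptions; the paper's actual subspace representation and dynamic model may differ.

```python
# Hypothetical sketch: low-dimensional posture subspace + predictive sensory model.
# All names, dimensions, and model choices here are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for mocap data: T frames of a humanoid's joint angles.
T, n_joints = 500, 25
poses = np.cumsum(rng.normal(scale=0.01, size=(T, n_joints)), axis=0)

# (1) Low-dimensional posture subspace (e.g., 3 latent dimensions via PCA).
pca = PCA(n_components=3)
latent = pca.fit_transform(poses)           # shape (T, 3)

# Synthetic stand-in for sensory feedback (e.g., gyroscope readings per frame).
sensors = rng.normal(size=(T, 3))

# (2) Predictive dynamics: from the current latent command and sensor reading,
# predict the next sensor reading. Here a linear ridge model stands in for
# whatever (possibly nonlinear) predictive model is learned in practice.
X = np.hstack([latent[:-1], sensors[:-1]])  # inputs at time t
y = sensors[1:]                             # sensory state at time t+1
model = Ridge(alpha=1.0).fit(X, y)

# Such a forward model can then be queried inside an optimizer to search for
# latent motor commands that stay close to the mocap trajectory while keeping
# the predicted sensory state (balance) stable.
pred_next = model.predict(X[:1])
print("predicted next sensory state:", pred_next)
```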