Dual Track Multimodal Automatic Learning through Human-Robot Interaction


Shuqiang Jiang, Weiqing Min, Xue Li, Huayang Wang, Jian Sun, Jiaqi Zhou

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 4485-4491. https://doi.org/10.24963/ijcai.2017/626

Human beings constantly improve their cognitive ability through automatic learning from interaction with the environment. Two important aspects of automatic learning are visual perception and knowledge acquisition, and fusing them is vital for improving the intelligence and interaction performance of robots. Many methods for automatic knowledge extraction and for recognition have been widely studied; however, little work focuses on integrating the two into a unified framework that enables joint visual perception and knowledge acquisition. To solve this problem, we propose a Dual Track Multimodal Automatic Learning (DTMAL) system, which consists of two components: Hybrid Incremental Learning (HIL) on the vision track and Multimodal Knowledge Extraction (MKE) on the knowledge track. HIL incrementally improves the recognition ability of the system by learning both new object samples and new object concepts. MKE constructs and updates multimodal knowledge items, based on the new objects recognized by HIL and on other knowledge mined from the multimodal signals. The fusion of the two tracks is a mutually promoting process, and the two jointly contribute to dual-track learning. We conducted experiments through human-robot interaction, and the results validate the effectiveness of the proposed system.
Keywords:
Robotics and Vision: Developmental Robotics
Robotics and Vision: Cognitive Robotics
Robotics and Vision: Human Robot Interaction
Robotics and Vision: Robotics and Vision
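
To make the two-track design described in the abstract concrete, below is a minimal, self-contained Python sketch of one interaction turn. It is an illustration under stated assumptions, not the authors' implementation: a nearest-class-mean recognizer stands in for HIL, a simple attribute dictionary stands in for MKE's multimodal knowledge items, and all names (HybridIncrementalLearner, MultimodalKnowledgeExtractor, dual_track_step, KnowledgeItem) are hypothetical.

```python
# Hypothetical sketch of the dual-track loop; names and the
# nearest-class-mean recognizer are illustrative assumptions,
# not the paper's actual method.
from dataclasses import dataclass, field


@dataclass
class KnowledgeItem:
    """A multimodal knowledge item: a concept plus attached evidence."""
    concept: str
    attributes: dict = field(default_factory=dict)


class HybridIncrementalLearner:
    """Vision track (HIL stand-in): a nearest-class-mean recognizer that
    can absorb new samples of known concepts and entirely new concepts."""

    def __init__(self):
        self.prototypes = {}  # concept -> (mean feature vector, sample count)

    def recognize(self, feature):
        if not self.prototypes:
            return None
        # Return the concept whose prototype is nearest (squared Euclidean).
        def dist(concept):
            mean, _ = self.prototypes[concept]
            return sum((f - m) ** 2 for f, m in zip(feature, mean))
        return min(self.prototypes, key=dist)

    def learn(self, concept, feature):
        # Incremental mean update; covers both new samples and new concepts.
        if concept not in self.prototypes:
            self.prototypes[concept] = (list(feature), 1)
            return
        mean, n = self.prototypes[concept]
        mean = [m + (f - m) / (n + 1) for m, f in zip(mean, feature)]
        self.prototypes[concept] = (mean, n + 1)


class MultimodalKnowledgeExtractor:
    """Knowledge track (MKE stand-in): builds and updates knowledge items
    from recognized objects and accompanying descriptions (e.g. speech)."""

    def __init__(self):
        self.items = {}  # concept -> KnowledgeItem

    def update(self, concept, attributes):
        item = self.items.setdefault(concept, KnowledgeItem(concept))
        item.attributes.update(attributes)
        return item


def dual_track_step(hil, mke, feature, speech_attributes, concept=None):
    """One interaction turn: recognize (or be told) the object, then let
    each track update the other, mimicking the mutual-promotion loop."""
    label = concept or hil.recognize(feature)
    if label is None:
        return None
    hil.learn(label, feature)                    # vision track improves
    return mke.update(label, speech_attributes)  # knowledge track grows


if __name__ == "__main__":
    hil, mke = HybridIncrementalLearner(), MultimodalKnowledgeExtractor()
    # A human introduces a new object concept with a toy 3-D visual feature.
    dual_track_step(hil, mke, [0.9, 0.1, 0.2], {"color": "red"}, concept="apple")
    # Later, the system recognizes a similar object on its own and
    # attaches newly extracted knowledge to the same concept.
    print(dual_track_step(hil, mke, [0.85, 0.15, 0.25], {"shape": "round"}))
```

The point the sketch tries to capture is the mutual promotion between tracks: a label produced or confirmed on the vision track immediately feeds the knowledge track, while human-supplied concepts and samples grow the recognizer itself.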