Deep Visuo-Tactile Learning: Estimation of Tactile Properties from Images (Extended Abstract)
Kuniyuki Takahashi, Jethro Tan
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), Sister Conferences Best Papers track. Pages 4780-4784.
https://doi.org/10.24963/ijcai.2020/665
Estimating tactile properties such as slipperiness or roughness from vision is important for interacting effectively with the environment. These tactile properties help humans, as well as robots, decide which actions to choose and how to perform them. We therefore propose a model that estimates the degree of tactile properties (e.g., the level of slipperiness or roughness) from visual perception alone. Our method extends an encoder-decoder network in which the latent variables are visual and tactile features. In contrast to previous works, our method requires no manual labeling, only RGB images and the corresponding tactile sensor data. All our data are collected with a webcam and a tactile sensor mounted on the end-effector of a robot, which strokes the material surfaces. We show that our model generalizes to materials not included in the training data.
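The abstract describes an encoder-decoder trained on paired RGB images and tactile sensor readings, with part of the latent vector tied to the tactile signal so no manual labels are needed. The following is a minimal sketch of that idea, assuming PyTorch; the layer sizes, latent split, tactile regression head, and joint loss are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumed PyTorch). All dimensions and names are
# hypothetical illustrations of the abstract's description, not the
# published network.
import torch
import torch.nn as nn

class VisuoTactileAE(nn.Module):
    """Encoder-decoder whose latent vector is split into visual and
    tactile features; the tactile part is supervised by the recorded
    tactile sensor signal instead of manual labels."""
    def __init__(self, visual_dim=8, tactile_dim=8, tactile_channels=16):
        super().__init__()
        self.tactile_dim = tactile_dim
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, visual_dim + tactile_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(visual_dim + tactile_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Head tying the tactile slice of the latent vector to the
        # tactile sensor reading collected during stroking.
        self.tactile_head = nn.Linear(tactile_dim, tactile_channels)

    def forward(self, rgb):
        z = self.encoder(rgb)                 # (B, visual_dim + tactile_dim)
        z_tactile = z[:, -self.tactile_dim:]  # tactile features
        return self.decoder(z), self.tactile_head(z_tactile), z_tactile

# Joint objective: image reconstruction + tactile regression, no human labels.
model = VisuoTactileAE()
rgb = torch.rand(4, 3, 64, 64)   # webcam crops of the material surface
tactile = torch.rand(4, 16)      # corresponding tactile sensor readings
recon, tactile_pred, _ = model(rgb)
loss = (nn.functional.mse_loss(recon, rgb)
        + nn.functional.mse_loss(tactile_pred, tactile))
loss.backward()
```

At inference time, only the RGB image is needed: the tactile slice of the latent vector (here `z_tactile`) serves as the estimated tactile feature, which is how the model can judge materials it has never touched.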
Keywords:
Robotics: Learning in Robotics
Robotics: Vision and Perception