picoTrans: Using Pictures as Input for Machine Translation on Mobile Devices
Andrew Finch, Wei Song, Kumiko Tanaka-Ishii, Eiichiro Sumita
In this paper we present a novel user interface that integrates two popular approaches to language translation for travelers, allowing multimodal communication between the parties involved: the picture book, in which the user simply points to multiple picture icons representing what they want to say, and the statistical machine translation system, which can translate arbitrary word sequences. Our prototype system tightly couples both processes within a single translation framework that inherits many of the positive features of both approaches while mitigating their main weaknesses. Our system differs from traditional approaches in that its mode of input is a sequence of pictures rather than text or speech. Text in the source language is generated automatically from the picture sequence and serves as a detailed representation of the intended meaning. The picture sequence not only provides a rapid method to communicate basic concepts but also gives a `second opinion' on the machine translation output, catching translation errors and allowing the users to retry the translation, thereby avoiding misunderstandings.