Correct-and-Memorize: Learning to Translate from Interactive Revisions
Rongxiang Weng, Hao Zhou, Shujian Huang, Lei Li, Yifan Xia, Jiajun Chen
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 5255-5263.
https://doi.org/10.24963/ijcai.2019/730
State-of-the-art machine translation models are still not on a par with human translators. Previous work incorporates human interactions into the neural machine translation process to obtain improved results in target languages. However, not all model translation errors are equal -- some are critical while others are minor. Meanwhile, the same translation mistakes occur repeatedly in similar contexts. To address both issues, we propose CAMIT, a novel method for translating in an interactive environment. Our proposed method works with critical revision instructions, and therefore allows humans to correct arbitrary words in model-translated sentences. In addition, CAMIT learns from and softly memorizes revision actions based on the context, alleviating the issue of repeated mistakes. Experiments in both ideal and real interactive translation settings demonstrate that our proposed CAMIT enhances machine translation results significantly while requiring fewer revision instructions from humans compared to previous methods.
Keywords:
Natural Language Processing: Machine Translation
Natural Language Processing: Natural Language Generation
Humans and AI: Human-Computer Interaction