Differentiated Attentive Representation Learning for Sentence Classification

Qianrong Zhou, Xiaojie Wang, Xuan Dong

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 4630-4636. https://doi.org/10.24963/ijcai.2018/644

Attention-based models have been shown to be effective in learning representations for sentence classification. They are typically equipped with a multi-hop attention mechanism. However, existing multi-hop models still tend to concentrate attention on the most frequently noticed words, which may not be important for classifying the current sentence, and they lack an explicit, effective way to shift attention away from a wrong part of the sentence. In this paper, we alleviate this problem by proposing a differentiated attentive learning model. It is composed of two branches of attention subnets and an example discriminator. An explicit signal carrying the loss information of the first attention subnet is passed to the second one, driving the two subnets to learn different attentive preferences. The example discriminator then selects the suitable attention subnet for classifying each sentence. Experimental results on real and synthetic datasets demonstrate the effectiveness of our model.
Keywords:
Natural Language Processing: Sentiment Analysis and Text Mining
Natural Language Processing: Text Classification
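To make the architecture in the abstract concrete, the following is a minimal, illustrative NumPy sketch of the idea: two attention subnets score the same word representations, the second subnet's attention is pushed away from the words the first one focuses on, and a discriminator-style gate selects one of the two sentence vectors. All names, the penalty term, and the gating rule are our assumptions for illustration, not the authors' actual model (which passes a loss-based signal during training).

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, d = 5, 8                      # toy sentence: T words, d-dim vectors
H = rng.normal(size=(T, d))      # word representations (e.g. RNN outputs)

w1 = rng.normal(size=d)          # query vector of the first attention subnet
w2 = rng.normal(size=d)          # query vector of the second attention subnet
v = rng.normal(size=d)           # gate parameters of the example discriminator

# First attention subnet: standard attentive pooling.
a1 = softmax(H @ w1)             # attention weights over the T words
s1 = a1 @ H                      # attended sentence vector, shape (d,)

# Second subnet: its word scores are penalized where the first subnet
# already attends (an illustrative stand-in for the explicit signal that
# drives the two subnets toward different attentive preferences).
a2 = softmax(H @ w2 - 5.0 * a1)
s2 = a2 @ H

# Example discriminator (hypothetical gate): choose which subnet's
# representation to use for classifying this sentence.
g = 1.0 / (1.0 + np.exp(-(v @ H.mean(axis=0))))
s = s1 if g > 0.5 else s2        # selected sentence representation
```

In the actual model both subnets and the discriminator are trained jointly; the sketch only shows the forward pass and how a differentiating signal can steer the second subnet's attention distribution away from the first's.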