Open Access
ARTICLE
Modeling Multi-Targets Sentiment Classification via Graph Convolutional Networks and Auxiliary Relation
1 Department of Computer Science, Chengdu University of Information Technology, Chengdu, 610225, China.
2 Central Washington University, Des Moines, WA 98198, USA.
* Corresponding Author: Zhengjie Gao. Email: .
Computers, Materials & Continua 2020, 64(2), 909-923. https://doi.org/10.32604/cmc.2020.09913
Received 27 January 2020; Accepted 01 March 2020; Issue published 10 June 2020
Abstract
Existing solutions do not work well when multiple targets coexist in a sentence. The reason is that existing approaches usually separate the targets and process each one independently: if the original sentence contains N targets, it is repeated N times, with only one target handled per pass. To some extent, this degenerates the fine-grained sentiment classification task into a sentence-level sentiment classification task, and processing each target in isolation ignores the internal relations and interactions between targets. Based on these considerations, we propose to use a Graph Convolutional Network (GCN) to model all targets appearing in a sentence simultaneously based on their positional relationships, and to construct a graph of the sentiment relations between targets based on differences in sentiment polarity between target words. In addition to the standard target-dependent sentiment classification task, an auxiliary node relation classification task is constructed. Experiments demonstrate that our model achieves performance comparable to state-of-the-art methods on the benchmark datasets of SemEval-2014 Task 4, i.e., restaurant and laptop reviews. Furthermore, the results show that treating target words as isolated individuals has disadvantages, and that the multi-task learning model enhances the feature extraction and expression ability of the model.
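To make the described architecture concrete, the following is a minimal PyTorch-style sketch of the idea, not the authors' released code: all class and variable names are hypothetical, and the adjacency construction and head designs are assumptions. It shows a single GCN layer over target nodes (with an adjacency matrix built from positional relations between targets) feeding two output heads: per-target sentiment classification and the auxiliary pairwise node relation classification.

```python
import torch
import torch.nn as nn

class TargetGCNLayer(nn.Module):
    """One graph-convolution layer over target nodes.

    H: (batch, n_targets, hidden) target representations.
    A: (batch, n_targets, n_targets) adjacency assumed to be built
       from positional relations between targets in the sentence.
    """
    def __init__(self, hidden_dim):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, H, A):
        # Row-normalize the adjacency by node degree, propagate
        # neighbor features, then apply a linear transform.
        deg = A.sum(dim=-1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear(torch.bmm(A / deg, H)))

class MultiTargetClassifier(nn.Module):
    """Joint model: a sentiment head per target plus an auxiliary
    relation head over target pairs (e.g., same vs. different polarity)."""
    def __init__(self, hidden_dim, n_polarities=3):
        super().__init__()
        self.gcn = TargetGCNLayer(hidden_dim)
        self.sentiment_head = nn.Linear(hidden_dim, n_polarities)
        # The relation head scores concatenated pairs of target vectors.
        self.relation_head = nn.Linear(2 * hidden_dim, 2)

    def forward(self, H, A):
        H = self.gcn(H, A)                        # (B, n, d)
        sentiment_logits = self.sentiment_head(H)  # per-target polarity
        n = H.size(1)
        # Build all ordered target pairs for the auxiliary relation task.
        pairs = torch.cat(
            [H.unsqueeze(2).expand(-1, -1, n, -1),
             H.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)  # (B, n, n, 2d)
        relation_logits = self.relation_head(pairs)
        return sentiment_logits, relation_logits
```

In a multi-task setup of this kind, the total loss would be a weighted sum of the sentiment loss and the auxiliary relation loss, so that the relation task regularizes the shared target representations rather than being an end in itself.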
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.