Open Access
ARTICLE
Deep Feature Fusion Model for Sentence Semantic Matching
School of Computer Science and Technology, QiLu University of Technology (Shandong Academy of Sciences), Jinan, 250353, China.
oOh! Media, Sydney, NSW, 2060, Australia.
Centre of Artificial Intelligence, University of Technology Sydney, Sydney, NSW, 2007, Australia.
*Corresponding Author: Wenpeng Lu. Email: .
Computers, Materials & Continua 2019, 61(2), 601-616. https://doi.org/10.32604/cmc.2019.06045
Abstract
Sentence semantic matching (SSM) is fundamental research for solving natural language processing tasks such as question answering and machine translation. The latest SSM research benefits from deep learning techniques that incorporate attention mechanisms to semantically match given sentences. However, fully capturing the semantic context without losing significant features during sentence encoding remains a challenge. To address this challenge, we propose a deep feature fusion model and integrate it into the most popular deep learning architecture for the sentence matching task. The integrated architecture mainly consists of an embedding layer, a deep feature fusion layer, a matching layer and a prediction layer. In addition, we compare commonly used loss functions and propose a novel hybrid loss function that integrates MSE and cross entropy, using a confidence interval and a threshold setting to preserve the indistinguishable instances during training. To evaluate our model's performance, we experiment on two real-world public data sets: LCQMC and Quora. The experimental results demonstrate that our model outperforms most existing advanced deep learning models for sentence matching, benefiting from our enhanced loss function and deep feature fusion model for capturing semantic context.
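The abstract's hybrid loss can be illustrated with a minimal sketch. The paper's exact formulation is not given here, so the weighting parameter `alpha`, the `margin` threshold, and the filtering rule below are all assumptions for illustration only: a per-example weighted sum of cross entropy and MSE, with a hypothetical margin filter that keeps hard, near-boundary ("indistinguishable") instances contributing to training.

```python
import numpy as np

def hybrid_loss(y_true, y_prob, alpha=0.5, margin=0.1):
    """Illustrative hybrid loss: weighted sum of cross entropy and MSE.

    y_true: binary match labels (0/1); y_prob: predicted match probabilities.
    NOTE: alpha and margin are hypothetical parameters, not from the paper.
    """
    eps = 1e-12
    p = np.clip(y_prob, eps, 1.0 - eps)  # avoid log(0)
    ce = -(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))
    mse = (y_true - y_prob) ** 2
    per_example = alpha * ce + (1.0 - alpha) * mse
    # Hypothetical thresholding: examples the model already predicts within
    # the margin are dropped, so indistinguishable pairs dominate the loss.
    hard = np.abs(y_prob - y_true) > margin
    return per_example[hard].mean() if hard.any() else per_example.mean()
```

A confident correct prediction (e.g. probability 0.95 for a positive pair) falls inside the margin and is filtered out, while an ambiguous prediction near 0.5 contributes both its cross-entropy and squared-error terms.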
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.