Sunqiang Hu1, Xiaoyu Li1, Yu Deng1,*, Yu Peng1, Bin Lin2, Shan Yang3
CMC-Computers, Materials & Continua, Vol.69, No.1, pp. 145-158, 2021, DOI:10.32604/cmc.2021.017441
04 June 2021
Abstract In recent years, many text summarization models based on pre-training methods have achieved very good results. However, in these text summarization models, semantic deviations can easily occur between the original input representation and the representation produced by the multi-layer encoder, which may result in inconsistencies between the generated summary and the source text content. Bidirectional Encoder Representations from Transformers (BERT) improves the performance of many tasks in Natural Language Processing (NLP). Although BERT has a strong capability to encode context, it lacks fine-grained semantic representation. To solve these two problems, we proposed a…