Wenfeng Zheng1, Siyu Lu1, Zhuohang Cai1, Ruiyang Wang1, Lei Wang2, Lirong Yin2,*
CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 2729-2745, 2024, DOI:10.32604/cmes.2023.046692
Published: 11 March 2024
Abstract: In the field of natural language processing (NLP), various pre-trained language models have emerged in recent years, and question answering (QA) systems have gained significant attention. However, as algorithms, data, and computing power advance, models have grown ever larger with increasing parameter counts, making training more costly and less efficient. To enhance the efficiency and accuracy of the training process while reducing model size, this paper proposes PAL-BERT, a first-order pruning model based on the ALBERT model, designed according to the characteristics of QA systems and language …
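To make the idea of first-order pruning concrete, the sketch below scores each weight by the first-order Taylor term |w · ∂L/∂w| (the estimated loss change from zeroing that weight) and removes the least important half. This is a minimal illustration of the general technique on a toy linear layer, not the paper's actual PAL-BERT implementation; all names and thresholds here are assumptions for demonstration.

```python
# Minimal sketch of first-order (Taylor-expansion) weight pruning.
# Uses a toy linear layer as a stand-in for a transformer weight matrix;
# the real PAL-BERT method applies this idea to ALBERT.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(8, 4)            # stand-in for a transformer weight matrix
x = torch.randn(16, 8)             # toy input batch
target = torch.randn(16, 4)

loss = nn.functional.mse_loss(model(x), target)
loss.backward()                    # gradients supply the first-order term

# First-order importance: |w * dL/dw| approximates the loss change
# if the weight were set to zero (Taylor expansion around current w).
scores = (model.weight * model.weight.grad).abs()

# Prune the 50% of weights with the lowest importance scores.
k = scores.numel() // 2
threshold = scores.flatten().kthvalue(k).values
mask = (scores > threshold).float()
with torch.no_grad():
    model.weight.mul_(mask)

sparsity = (model.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.2f}")
```

In practice such pruning is applied iteratively during fine-tuning, with the mask recomputed as the loss landscape changes, rather than in a single one-shot pass as shown here.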