Open Access

ARTICLE


DPAL-BERT: A Faster and Lighter Question Answering Model

by Lirong Yin1, Lei Wang1, Zhuohang Cai2, Siyu Lu2,*, Ruiyang Wang2, Ahmed AlSanad3, Salman A. AlQahtani3, Xiaobing Chen4, Zhengtong Yin5, Xiaolu Li6, Wenfeng Zheng2,3,*

1 Department of Geography and Anthropology, Louisiana State University, Baton Rouge, LA 70803, USA
2 School of Automation, University of Electronic Science and Technology of China, Chengdu, 610054, China
3 College of Computer and Information Sciences, King Saud University, Riyadh, 11574, Saudi Arabia
4 School of Electrical and Computer Engineering, Louisiana State University, Baton Rouge, LA 70803, USA
5 College of Resources and Environmental Engineering, Guizhou University, Guiyang, 550025, China
6 School of Geographical Sciences, Southwest University, Chongqing, 400715, China

* Corresponding Authors: Siyu Lu; Wenfeng Zheng

(This article belongs to the Special Issue: Emerging Artificial Intelligence Technologies and Applications)

Computer Modeling in Engineering & Sciences 2024, 141(1), 771-786. https://doi.org/10.32604/cmes.2024.052622

Abstract

Recent advancements in natural language processing have given rise to numerous pre-trained language models for question-answering systems. However, with the constant evolution of algorithms, data, and computing power, the increasing size and complexity of these models have led to higher training costs and reduced efficiency. This study aims to minimize the inference time of such models while maintaining computational performance. It proposes DPAL-BERT, a novel distillation model for PAL-BERT: knowledge distillation is applied with PAL-BERT as the teacher model to train two student models, DPAL-BERT-Bi and DPAL-BERT-C. The dataset is enhanced through techniques such as masking, replacement, and n-gram sampling to optimize knowledge transfer. Experimental results show that the distilled models greatly outperform models trained from scratch. Although the distilled models exhibit a slight decrease in performance compared to PAL-BERT, they reduce inference time to just 0.25% of the original, demonstrating the effectiveness of the proposed approach in balancing model performance and efficiency.
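Neither the distillation objective nor the augmentation procedure is detailed on this page. As a rough illustration only, the PyTorch sketch below shows a generic teacher-student distillation loss of the kind the abstract describes: a softened Kullback-Leibler term against the teacher's logits combined with cross-entropy against the gold labels. The function name and the `temperature` and `alpha` hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic distillation objective: soft-target KL + hard-label CE."""
    # Soften both distributions with the temperature before comparing them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between softened distributions, scaled by T^2 as in
    # the standard (Hinton-style) distillation formulation.
    soft_loss = F.kl_div(soft_student, soft_teacher,
                         reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

Similarly, the n-gram masking mentioned in the abstract is a common BERT-style augmentation; a minimal sketch of one plausible variant follows, with the masking probability `p` and span length `max_n` chosen arbitrarily for illustration.

```python
import random

def ngram_mask(tokens, mask_token="[MASK]", p=0.15, max_n=3):
    """Randomly replace contiguous n-gram spans with a mask token."""
    out = list(tokens)
    i = 0
    while i < len(out):
        if random.random() < p:
            n = random.randint(1, max_n)  # length of the masked span
            for j in range(i, min(i + n, len(out))):
                out[j] = mask_token
            i += n
        else:
            i += 1
    return out
```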

Keywords


Cite This Article

APA Style
Yin, L., Wang, L., Cai, Z., Lu, S., Wang, R. et al. (2024). DPAL-BERT: A faster and lighter question answering model. Computer Modeling in Engineering & Sciences, 141(1), 771-786. https://doi.org/10.32604/cmes.2024.052622
Vancouver Style
Yin L, Wang L, Cai Z, Lu S, Wang R, AlSanad A, et al. DPAL-BERT: A faster and lighter question answering model. Comput Model Eng Sci. 2024;141(1):771-786. https://doi.org/10.32604/cmes.2024.052622
IEEE Style
L. Yin et al., “DPAL-BERT: A Faster and Lighter Question Answering Model,” Comput. Model. Eng. Sci., vol. 141, no. 1, pp. 771-786, 2024. https://doi.org/10.32604/cmes.2024.052622



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.