Anman Zhang1, Bohan Li1,2,3,*, Wenhuan Wang1, Shuo Wan1, Weitong Chen4
CMC-Computers, Materials & Continua, Vol. 63, No. 3, pp. 1499-1514, 2020, DOI: 10.32604/cmc.2020.09962. Published 30 April 2020.
Abstract: Active learning has been widely utilized to reduce the labeling cost of supervised learning. By selecting specific instances to train the model, its performance can be improved within a limited number of labeling steps. However, little work has examined how effective active learning actually is in such settings. In this paper, we propose a deep active learning model with bidirectional encoder representations from transformers (BERT) for text classification. BERT takes advantage of the self-attention mechanism to integrate contextual information, which helps accelerate the convergence of training. As for the process of active learning, we design an …
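The abstract is cut off before it names the paper's query strategy, so the following is only a minimal sketch of the general idea it describes: pool-based active learning with a BERT classifier, using least-confidence sampling as an assumed, commonly used acquisition criterion. The model names, the `select_most_uncertain` helper, and the two-class setup are illustrative assumptions, not the authors' actual design.

```python
# Sketch of one least-confidence acquisition step for pool-based active
# learning with a BERT text classifier. Assumption: the paper's actual
# query strategy is not specified in the truncated abstract above.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # binary classification, assumed
)
model.eval()

def select_most_uncertain(pool_texts, k=8):
    """Return indices of the k pool instances the model is least sure about."""
    inputs = tokenizer(pool_texts, padding=True, truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    confidence = probs.max(dim=-1).values          # top-class probability
    return torch.topk(-confidence, k).indices.tolist()  # least confident first

# One active-learning round: query labels for the selected instances,
# add them to the labeled set, fine-tune BERT, and repeat until the
# labeling budget is exhausted.
unlabeled_pool = ["an example sentence", "another unlabeled document", "..."]
query_ids = select_most_uncertain(unlabeled_pool, k=2)
```

In a full loop, the fine-tuning step between rounds is what lets BERT's contextual representations pay off: the model's uncertainty estimates improve as labeled data accumulates, which is consistent with the abstract's claim about faster training convergence.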