Open Access

ARTICLE


Text Simplification Using Transformer and BERT

Sarah Alissa1,*, Mike Wald2

1 College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
2 School of Electronics and Computer Science, University of Southampton, Southampton, United Kingdom

* Corresponding Author: Sarah Alissa.

Computers, Materials & Continua 2023, 75(2), 3479-3495. https://doi.org/10.32604/cmc.2023.033647

Abstract

Reading and writing are the main methods of interacting with web content. Text simplification tools help people with cognitive impairments, new language learners, and children, who may find complex web content difficult to understand. Text simplification is the process of transforming complex text into more readable and understandable text. Recent approaches to text simplification adopt the machine translation paradigm, learning simplification rules from a parallel corpus of complex and simple sentences. In this paper, we propose two models based on the transformer, an encoder-decoder architecture that achieves state-of-the-art (SOTA) results in machine translation. Training proceeds in three steps: preprocessing the data with a subword tokenizer, training the model with the Adam optimizer, and using the trained model to decode the output. The first model uses the transformer alone; the second integrates Bidirectional Encoder Representations from Transformers (BERT) as the encoder to improve training time and results. The transformer-only model was evaluated with the Bilingual Evaluation Understudy (BLEU) score and achieved 53.78 on the WikiSmall dataset. The experiment on the BERT-integrated model shows that its validation loss decreased much faster than that of the model without BERT. However, its BLEU score was lower (44.54), possibly because the dataset was too small, so the model overfit and failed to generalize well. Future work could therefore train the second model on a larger dataset such as WikiLarge. In addition, further analysis of the models' results and the dataset was carried out using different evaluation metrics to better understand their performance.
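The abstract reports model quality via the BLEU score, which measures modified n-gram precision between a system output and a reference, scaled by a brevity penalty. As a rough illustration of what that metric computes, here is a minimal, self-contained sentence-level BLEU sketch in pure Python; it is an assumption for exposition, not the paper's actual evaluation script, which would typically use a standard toolkit and corpus-level scoring with smoothing.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU with uniform n-gram weights and a brevity penalty.

    Returns 0.0 when any n-gram precision is zero (no smoothing applied).
    """
    ref, cand = reference.split(), candidate.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        cand_counts = Counter(ngrams(cand, n))
        # Clipped overlap: each candidate n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(len(cand) - n + 1, 0)
        if total == 0 or overlap == 0:
            return 0.0
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

A perfect match scores 1.0, while a candidate sharing no n-grams with the reference scores 0.0; reported scores such as 53.78 correspond to this value multiplied by 100.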

Keywords


Cite This Article

APA Style
Alissa, S., Wald, M. (2023). Text simplification using transformer and BERT. Computers, Materials & Continua, 75(2), 3479-3495. https://doi.org/10.32604/cmc.2023.033647
Vancouver Style
Alissa S, Wald M. Text simplification using transformer and BERT. Comput Mater Contin. 2023;75(2):3479-3495. https://doi.org/10.32604/cmc.2023.033647
IEEE Style
S. Alissa and M. Wald, “Text Simplification Using Transformer and BERT,” Comput. Mater. Contin., vol. 75, no. 2, pp. 3479-3495, 2023. https://doi.org/10.32604/cmc.2023.033647



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.