Open Access

ARTICLE

Comparing Fine-Tuning, Zero and Few-Shot Strategies with Large Language Models in Hate Speech Detection in English

Ronghao Pan, José Antonio García-Díaz*, Rafael Valencia-García

Departamento de Informática y Sistemas, Universidad de Murcia, Campus de Espinardo, Murcia, 30100, Spain

* Corresponding Author: José Antonio García-Díaz

(This article belongs to the Special Issue: Emerging Artificial Intelligence Technologies and Applications)

Computer Modeling in Engineering & Sciences 2024, 140(3), 2849-2868. https://doi.org/10.32604/cmes.2024.049631

Abstract

Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One relevant capability is in-context learning: the model receives natural-language instructions or task demonstrations and generates the expected output for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs under strategies ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, the decoder-only model Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, the evaluated models perform well in hate speech detection, beating the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language, while the fine-tuned approach tends to produce many false positives.
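The zero-shot and few-shot strategies contrasted in the abstract differ only in how the prompt is assembled before it is passed to the model: zero-shot sends the instruction alone, while few-shot prepends labeled demonstrations (which the paper selects via information retrieval). A minimal sketch of this prompt construction for binary sexism detection, where the instruction wording and example texts are illustrative assumptions rather than the paper's actual templates:

```python
def build_prompt(text, demonstrations=None):
    """Assemble a zero-shot or few-shot classification prompt.

    demonstrations: optional list of (example_text, label) pairs; when
    provided, they are prepended as labeled demonstrations (few-shot).
    """
    instruction = (
        "Classify the following text as 'sexist' or 'not sexist'. "
        "Answer with a single label.\n\n"
    )
    shots = ""
    if demonstrations:
        for example, label in demonstrations:
            shots += f"Text: {example}\nLabel: {label}\n\n"
    # The trailing "Label:" cues the model to emit only the class name.
    return instruction + shots + f"Text: {text}\nLabel:"

# Zero-shot: instruction plus the test instance only.
zero_shot = build_prompt("Example input tweet")

# Few-shot: instruction plus labeled demonstrations (dummy examples here;
# the paper retrieves similar training instances instead).
few_shot = build_prompt(
    "Example input tweet",
    demonstrations=[
        ("Women should stay in the kitchen.", "sexist"),
        ("Nice weather today.", "not sexist"),
    ],
)
```

The resulting string would then be fed to an instruction-tuned LLM (e.g., via a text-generation API), with the generated continuation parsed back into a class label.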

Cite This Article

APA Style
Pan, R., García-Díaz, J. A., & Valencia-García, R. (2024). Comparing fine-tuning, zero and few-shot strategies with large language models in hate speech detection in English. Computer Modeling in Engineering & Sciences, 140(3), 2849-2868. https://doi.org/10.32604/cmes.2024.049631
Vancouver Style
Pan R, García-Díaz JA, Valencia-García R. Comparing fine-tuning, zero and few-shot strategies with large language models in hate speech detection in English. Comput Model Eng Sci. 2024;140(3):2849-2868. https://doi.org/10.32604/cmes.2024.049631
IEEE Style
R. Pan, J.A. García-Díaz, and R. Valencia-García, “Comparing Fine-Tuning, Zero and Few-Shot Strategies with Large Language Models in Hate Speech Detection in English,” Comput. Model. Eng. Sci., vol. 140, no. 3, pp. 2849-2868, 2024. https://doi.org/10.32604/cmes.2024.049631



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.