Open Access

ARTICLE


LEGF-DST: LLMs-Enhanced Graph-Fusion Dual-Stream Transformer for Fine-Grained Chinese Malicious SMS Detection

Xin Tong1, Jingya Wang1,*, Ying Yang2, Tian Peng3, Hanming Zhai1, Guangming Ling4

1 School of Information and Cybersecurity, People’s Public Security University of China, Beijing, 100038, China
2 Cyber Investigation Technology Research and Development Center, The Third Research Institute of the Ministry of Public Security, Shanghai, 201204, China
3 Department of Cybersecurity Defense, Beijing Police College, Beijing, 102202, China
4 School of Computer Science, Henan Institute of Engineering, Zhengzhou, 451191, China

* Corresponding Author: Jingya Wang. Email: email

Computers, Materials & Continua 2025, 82(2), 1901-1924. https://doi.org/10.32604/cmc.2024.059018

Abstract

With the widespread use of SMS (Short Message Service), the proliferation of malicious SMS has emerged as a pressing societal issue. While deep learning-based text classifiers offer promise, they often exhibit suboptimal performance in fine-grained detection tasks, primarily due to imbalanced datasets and insufficient model representation capabilities. To address this challenge, this paper proposes an LLMs-enhanced graph-fusion dual-stream Transformer model for fine-grained Chinese malicious SMS detection. During the data processing stage, Large Language Models (LLMs) are employed for data augmentation, mitigating dataset imbalance. In the data input stage, both word-level and character-level features are utilized as model inputs, enriching the feature set and preventing information loss. A dual-stream Transformer serves as the backbone network in the representation learning stage, complemented by a graph-based feature fusion mechanism. At the output stage, both a supervised classification cross-entropy loss and a supervised contrastive learning loss are used as multi-task optimization objectives, further enhancing the model's feature representation. Experimental results demonstrate that the proposed method significantly outperforms baselines on a publicly available Chinese malicious SMS dataset.
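The multi-task objective described in the output stage — a classification cross-entropy loss combined with a supervised contrastive loss — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the supervised contrastive term follows the standard formulation (same-class embeddings are pulled together, other classes pushed apart), and the weight `alpha` and temperature are hypothetical hyperparameters.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Softmax cross-entropy over a batch: (N, C) logits, (N,) integer labels.
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def sup_con_loss(features, labels, temperature=0.1):
    # Supervised contrastive loss: for each anchor, maximize the log-probability
    # of its same-class samples relative to all other samples in the batch.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)  # L2-normalize
    sim = f @ f.T / temperature                                      # cosine sims
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    # Log-softmax over all non-self samples for each anchor.
    log_prob = sim - np.log((np.exp(sim) * not_self).sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & not_self
    pos_counts = positives.sum(axis=1)
    valid = pos_counts > 0  # skip anchors with no same-class partner in the batch
    per_anchor = -(log_prob * positives).sum(axis=1)[valid] / pos_counts[valid]
    return per_anchor.mean()

def multi_task_loss(logits, features, labels, alpha=0.5):
    # Weighted sum of the two objectives; alpha is an assumed balancing weight.
    return cross_entropy(logits, labels) + alpha * sup_con_loss(features, labels)
```

In practice the classification head produces `logits` while an intermediate (e.g., pooled) representation supplies `features`, so both terms backpropagate through the shared backbone; the contrastive term encourages class-separated embeddings, which is especially useful for the minority classes in a fine-grained, imbalanced setting.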

Keywords

Transformers; malicious SMS; multi-task learning; large language models

Cite This Article

APA Style
Tong, X., Wang, J., Yang, Y., Peng, T., Zhai, H. et al. (2025). LEGF-DST: LLMs-Enhanced Graph-Fusion Dual-Stream Transformer for Fine-Grained Chinese Malicious SMS Detection. Computers, Materials & Continua, 82(2), 1901–1924. https://doi.org/10.32604/cmc.2024.059018
Vancouver Style
Tong X, Wang J, Yang Y, Peng T, Zhai H, Ling G. LEGF-DST: LLMs-Enhanced Graph-Fusion Dual-Stream Transformer for Fine-Grained Chinese Malicious SMS Detection. Comput Mater Contin. 2025;82(2):1901–1924. https://doi.org/10.32604/cmc.2024.059018
IEEE Style
X. Tong, J. Wang, Y. Yang, T. Peng, H. Zhai, and G. Ling, “LEGF-DST: LLMs-Enhanced Graph-Fusion Dual-Stream Transformer for Fine-Grained Chinese Malicious SMS Detection,” Comput. Mater. Contin., vol. 82, no. 2, pp. 1901–1924, 2025. https://doi.org/10.32604/cmc.2024.059018



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.