Open Access

ARTICLE


CDR2IMG: A Bridge from Text to Image in Telecommunication Fraud Detection

Zhen Zhen1, Jian Gao1,2,*

1 School of Information Network Security, People’s Public Security University of China, Beijing, 100038, China
2 Key Laboratory of Safety Precautions and Risk Assessment, Ministry of Public Security, Beijing, 102623, China

* Corresponding Author: Jian Gao.

Computer Systems Science and Engineering 2023, 47(1), 955-973. https://doi.org/10.32604/csse.2023.039525

Abstract

Telecommunication fraud has recently run rampant worldwide. However, previous studies depend heavily on expert-knowledge-based feature engineering to extract behavioral information, which cannot adapt to the fast-changing modes of fraudulent subscribers. Therefore, we propose a new approach that needs no hand-designed features but directly takes raw Call Detail Record (CDR) data as input for the classifier. Concretely, we propose a fraud detection method using a convolutional neural network (CNN) that treats CDR data as images, enabling computer vision techniques such as image augmentation. Comprehensive experiments on the real-world dataset from the 2020 Digital Sichuan Innovation Competition show that our proposed method outperforms classic methods on many metrics, with excellent stability under changes in both the quantity and the balance of samples. Compared with the state-of-the-art method, the proposed method achieves about an 89.98% F1-score and a 92.93% AUC, improvements of 2.97% and 0.48%, respectively. With the augmentation technique, the model's performance can be further enhanced to a 91.09% F1-score and a 94.49% AUC. Beyond telecommunication fraud detection, our method can also be extended to other text datasets to automatically discover new features from the perspective of computer vision and its powerful methods.
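The core idea described above, mapping raw CDR text to an image-like array that a CNN can consume, can be sketched as follows. This is a minimal illustration, not the authors' actual encoding scheme: the field layout of the sample records, the 32×32 canvas size, and the character-to-pixel mapping (byte values scaled to [0, 1]) are all assumptions made here for demonstration.

```python
import numpy as np

def cdr_to_image(records, height=32, width=32):
    """Encode raw CDR text rows as a single-channel 'image'.

    Each record (e.g. "caller,callee,duration,timestamp") becomes one row
    of pixels: characters are mapped to their code points and scaled to
    [0, 1]. Rows are padded or truncated to a fixed width so the result
    can be fed to a CNN like any other grayscale image.
    """
    img = np.zeros((height, width), dtype=np.float32)
    for i, rec in enumerate(records[:height]):
        codes = [ord(c) for c in rec[:width]]
        img[i, :len(codes)] = np.asarray(codes, dtype=np.float32) / 255.0
    return img

# Hypothetical CDR rows for one subscriber (fields are illustrative only).
records = [
    "13800001111,13900002222,65,20200501T0930",
    "13800001111,13700003333,12,20200501T1015",
]
img = cdr_to_image(records)
print(img.shape)  # (32, 32)
```

Once CDRs are in this form, standard vision tooling (convolutional layers, image augmentation such as shifts or flips) applies without any hand-designed behavioral features.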

Keywords


Cite This Article

Z. Zhen and J. Gao, "Cdr2img: a bridge from text to image in telecommunication fraud detection," Computer Systems Science and Engineering, vol. 47, no.1, pp. 955–973, 2023. https://doi.org/10.32604/csse.2023.039525



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.