Open Access

ARTICLE


An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments

by Weizheng Wang1,3, Xiangqi Wang2,*, Xianmin Pan1, Xingxing Gong3, Jian Liang3, Pradip Kumar Sharma4, Osama Alfarraj5, Wael Said6

1 College of Information Science and Engineering, Hunan Women’s University, Changsha, 410138, China
2 School of Mathematics and Statistics, Hunan First Normal University, Changsha, 410138, China
3 School of Computer & Communication Engineering, Changsha University of Science & Technology, Changsha, 410114, China
4 Department of Computing Science, University of Aberdeen, Aberdeen, AB24 3FX, UK
5 Department of Computer Science, Community College, King Saud University, Riyadh, 11437, Saudi Arabia
6 Department of Computer Science, Faculty of Computers and Informatics, Zagazig University, Zagazig, 44511, Egypt

* Corresponding Author: Xiangqi Wang. Email: email

Computers, Materials & Continua 2023, 76(3), 3859-3876. https://doi.org/10.32604/cmc.2023.041346

Abstract

Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations; the residual perturbations tend to be amplified as they propagate through the deeper layers of a network, leading to misclassification. Moreover, denoising degrades classification accuracy on original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) architectures. The detector takes the classification results of the different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the model's predictions on original examples and their denoised counterparts, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, thereby avoiding the error amplification caused by denoising. Experimental results show strong detection performance against mainstream AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% detection success rate on FGSM while reducing the accuracy on clean examples by only 4%.
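The discrepancy-based detection idea in the abstract can be sketched in a few lines: run the classifier on the original input and on several denoised copies, and flag the input as adversarial if enough denoisers change the predicted label. This is a minimal illustrative sketch, not the paper's implementation; the classifier, denoisers, and threshold below are all stand-in assumptions.

```python
# Hedged sketch: majority-vote AE detection via prediction discrepancy.
# `classify` and the `denoisers` are toy stand-ins, not the paper's models.

def detect_adversarial(classify, denoisers, image, vote_threshold=0.5):
    """Return True if `image` is flagged as a likely adversarial example.

    classify  -- function mapping an image to a class label
    denoisers -- list of denoising functions (e.g., median, Gaussian, NLM)
    """
    original_label = classify(image)
    # One "disagreement vote" per denoiser whose output changes the label.
    disagreements = sum(
        1 for denoise in denoisers
        if classify(denoise(image)) != original_label
    )
    return disagreements / len(denoisers) >= vote_threshold


# Toy demonstration: a "classifier" that thresholds the mean pixel value,
# and two simple "denoisers" (coarse quantization and value clipping).
classify = lambda img: int(sum(img) / len(img) > 0.5)
denoisers = [
    lambda img: [round(p, 1) for p in img],            # coarse quantization
    lambda img: [min(max(p, 0.1), 0.9) for p in img],  # clipping
]

clean = [0.9, 0.8, 0.9]          # far from the decision boundary
borderline = [0.51, 0.52, 0.50]  # tiny perturbation flips the label

print(detect_adversarial(classify, denoisers, clean))       # False
print(detect_adversarial(classify, denoisers, borderline))  # True
```

The key design point, as in the paper, is that no model weights are touched: detection relies only on comparing labels before and after denoising, so the denoised image is never propagated as the final classification input and denoising-induced errors are not amplified.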

Cite This Article

APA Style
Wang, W., Wang, X., Pan, X., Gong, X., Liang, J. et al. (2023). An intelligent secure adversarial examples detection scheme in heterogeneous complex environments. Computers, Materials & Continua, 76(3), 3859-3876. https://doi.org/10.32604/cmc.2023.041346
Vancouver Style
Wang W, Wang X, Pan X, Gong X, Liang J, Sharma PK, et al. An intelligent secure adversarial examples detection scheme in heterogeneous complex environments. Comput Mater Contin. 2023;76(3):3859-3876. https://doi.org/10.32604/cmc.2023.041346
IEEE Style
W. Wang et al., “An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments,” Comput. Mater. Contin., vol. 76, no. 3, pp. 3859-3876, 2023. https://doi.org/10.32604/cmc.2023.041346



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.