Open Access
ARTICLE
An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments
1 College of Information Science and Engineering, Hunan Women’s University, Changsha, 410138, China
2 School of Mathematics and Statistics, Hunan First Normal University, Changsha, 410138, China
3 School of Computer & Communication Engineering, Changsha University of Science & Technology, Changsha, 410114, China
4 Department of Computing Science, University of Aberdeen, Aberdeen, AB24 3FX, UK
5 Department of Computer Science, Community College, King Saud University, Riyadh, 11437, Saudi Arabia
6 Department of Computer Science, Faculty of Computers and Informatics, Zagazig University, Zagazig, 44511, Egypt
* Corresponding Author: Xiangqi Wang. Email:
Computers, Materials & Continua 2023, 76(3), 3859-3876. https://doi.org/10.32604/cmc.2023.041346
Received 19 April 2023; Accepted 19 July 2023; Issue published 08 October 2023
Abstract
Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations; the residual perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassification. Moreover, image denoising degrades the classification accuracy on original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) structures. The detector takes the classification results of the different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the model's predictions on original examples and on their denoised counterparts, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, thereby avoiding the error amplification caused by denoising. Experimental results show strong detection performance against mainstream AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% detection success rate on FGSM while reducing the accuracy on clean examples by only 4%.
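As a rough illustration of the detection logic described in the abstract, the sketch below compares a classifier's prediction on a raw input with its predictions on several denoised copies and flags the input as adversarial when enough copies disagree. This is not the authors' implementation: the classifier `model`, the choice of denoisers, and the `min_votes` threshold are all assumptions made for the example.

```python
# A minimal sketch of prediction-discrepancy AE detection, assuming a
# PyTorch classifier and standard denoisers from scikit-image / SciPy.
# Names here (predict_label, DENOISERS, is_adversarial) are illustrative.
import numpy as np
import torch
from skimage.restoration import denoise_tv_chambolle, denoise_wavelet
from scipy.ndimage import median_filter

def predict_label(model, image: np.ndarray) -> int:
    """Classify an HxWxC float image in [0, 1] and return the label."""
    x = torch.from_numpy(image).float().permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        return int(model(x).argmax(dim=1).item())

# Several traditional denoisers: removing (part of) an adversarial
# perturbation tends to change the predicted label.
DENOISERS = [
    lambda img: denoise_tv_chambolle(img, weight=0.1, channel_axis=-1),
    lambda img: denoise_wavelet(img, channel_axis=-1),
    lambda img: median_filter(img, size=(3, 3, 1)),
]

def is_adversarial(model, image: np.ndarray, min_votes: int = 2) -> bool:
    """Majority vote: flag the input as an AE if at least `min_votes`
    denoised copies are classified differently from the raw input."""
    original = predict_label(model, image)
    disagreements = sum(
        predict_label(model, np.clip(d(image), 0.0, 1.0)) != original
        for d in DENOISERS
    )
    return disagreements >= min_votes
```

Because the detector only post-processes predictions, it leaves the protected model's structure and parameters untouched, which matches the low-overhead property claimed in the abstract.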
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.