Open Access

ARTICLE


Exploratory Research on Defense against Natural Adversarial Examples in Image Classification

Yaoxuan Zhu, Hua Yang, Bin Zhu*

The State Key Laboratory of Pulsed Power Laser Technology, National University of Defense Technology, Hefei, 230037, China

* Corresponding Author: Bin Zhu.

Computers, Materials & Continua 2025, 82(2), 1947-1968. https://doi.org/10.32604/cmc.2024.057866

Abstract

The emergence of adversarial examples has revealed the inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). Particularly in recent years, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven largely ineffective against them. This paper explores defenses against natural adversarial examples from three perspectives: the adversarial examples themselves, the model architecture, and the dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities; in particular, as the depth of a network increases, its defense against natural adversarial examples strengthens. Finally, the impact of dataset class distribution on defense capability is examined from two aspects: the number of classes in the training set and the number of predicted classes. Results indicate that reducing the number of training classes enhances the model’s defense against natural adversarial examples. Additionally, under a fixed number of training classes, some CNN models show an optimal range of predicted classes for achieving the best defense performance against these adversarial examples.
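To make the first step of the methodology concrete, the following is a minimal Grad-CAM-style sketch of the kind of class-activation visualization the abstract describes. It is not the authors' code: the choice of a torchvision ResNet-50, the hooked layer (`layer4`), and the input tensor `x` are all assumptions made for illustration.

```python
# Minimal Grad-CAM-style heatmap sketch (illustrative; not the paper's exact CAM method).
# Assumes a preprocessed input tensor x of shape (1, 3, 224, 224).
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1).eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    # Cache the feature maps of the hooked layer on the forward pass.
    activations["feat"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    # Cache the gradient of the class score w.r.t. those feature maps.
    gradients["feat"] = grad_output[0].detach()

# Hook the last convolutional stage of ResNet-50 (an assumption; any late conv layer works).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(x: torch.Tensor) -> torch.Tensor:
    """Return a normalized (H, W) heatmap for the top-1 predicted class."""
    logits = model(x)
    cls = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, cls].backward()
    # Weight each feature map by its spatially averaged gradient, sum, then ReLU.
    w = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = F.relu((w * activations["feat"]).sum(dim=1))     # (1, h, w)
    # Upsample to the input resolution and normalize to [0, 1].
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

Overlaying such a heatmap on a natural adversarial example is what allows the attack patterns mentioned in the abstract (e.g., the model attending to misleading background regions) to be seen directly.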
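The second step, comparing how different CNN architectures hold up against natural adversarial examples, amounts to an accuracy benchmark over a fixed set of such examples. Below is a hypothetical evaluation-harness sketch, assuming an ImageNet-A-style dataset laid out for `ImageFolder`; the directory path, the model selection, and the omission of the class-index remapping that a real ImageNet-A evaluation requires are all assumptions, not details from the paper.

```python
# Hypothetical harness: top-1 accuracy of several pretrained CNNs on a folder
# of natural adversarial examples. In practice, ImageFolder's folder-order
# labels must be remapped to the models' ImageNet class indices; that mapping
# is omitted here to keep the sketch short.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("natural_adv_examples/", transform=tf)  # hypothetical path
loader = DataLoader(data, batch_size=64, num_workers=4)

candidates = {
    "resnet18":    models.resnet18(weights="IMAGENET1K_V1"),
    "resnet50":    models.resnet50(weights="IMAGENET1K_V1"),
    "vgg16":       models.vgg16(weights="IMAGENET1K_V1"),
    "densenet121": models.densenet121(weights="IMAGENET1K_V1"),
}

@torch.no_grad()
def top1_accuracy(model: torch.nn.Module) -> float:
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

for name, m in candidates.items():
    print(f"{name}: {top1_accuracy(m):.3f}")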

Keywords

Image classification; convolutional neural network; natural adversarial example; dataset; defense against adversarial examples

Cite This Article

APA Style
Zhu, Y., Yang, H., & Zhu, B. (2025). Exploratory research on defense against natural adversarial examples in image classification. Computers, Materials & Continua, 82(2), 1947–1968. https://doi.org/10.32604/cmc.2024.057866
Vancouver Style
Zhu Y, Yang H, Zhu B. Exploratory research on defense against natural adversarial examples in image classification. Comput Mater Contin. 2025;82(2):1947–1968. https://doi.org/10.32604/cmc.2024.057866
IEEE Style
Y. Zhu, H. Yang, and B. Zhu, “Exploratory Research on Defense against Natural Adversarial Examples in Image Classification,” Comput. Mater. Contin., vol. 82, no. 2, pp. 1947–1968, 2025. https://doi.org/10.32604/cmc.2024.057866



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.