Open Access

ARTICLE


Practical Adversarial Attacks Imperceptible to Humans in Visual Recognition

Donghyeok Park1, Sumin Yeon2, Hyeon Seo2, Seok-Jun Buu2, Suwon Lee2,*

1 Aircraft Final Assembly Manufacturing Engineering Team, Korea Aerospace Industries, Sacheon-si, 52529, Republic of Korea
2 Department of Computer Science and Engineering, Gyeongsang National University, Jinju-si, 52828, Republic of Korea

* Corresponding Author: Suwon Lee. Email: email

Computer Modeling in Engineering & Sciences 2025, 142(3), 2725-2737. https://doi.org/10.32604/cmes.2025.061732

Abstract

Recent research on adversarial attacks has focused primarily on white-box techniques, with comparatively little exploration of black-box methods. Moreover, many black-box studies assume that both the output label and the probability distribution are observable and impose no limit on the number of attack attempts. This disregard for the real-world practicality of attacks, particularly their detectability by humans, has left a gap in the research landscape. To address these limitations, our study employs a similar-color attack method under realistic constraints: access to the output label only, a budget of at most 100 attack attempts, and evaluation of the attacks through human perceptibility testing. Under these conditions, we demonstrate that black-box attacks remain effective at deceiving models and achieve a success rate of 82.68% in deceiving humans. This study underscores the importance of research that addresses the challenge of deceiving both humans and models, highlighting real-world applicability.
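The constraints described above (label-only feedback, a 100-query budget, and visually similar color perturbations) can be illustrated with a minimal sketch. Note that this is not the authors' actual algorithm; the function names (`similar_color_attack`, `toy_model`), the perturbation bound `eps`, and the toy brightness classifier are all hypothetical stand-ins used only to show the query-limited, label-only attack loop.

```python
import numpy as np

def similar_color_attack(image, predict_label, true_label,
                         max_queries=100, eps=12, seed=None):
    """Label-only black-box attack sketch: repeatedly shift pixel colors by
    small random amounts (keeping the image visually similar) and query the
    model, stopping when the predicted label flips or the budget runs out.
    Returns (adversarial_image, queries_used) or (None, max_queries)."""
    rng = np.random.default_rng(seed)
    for query in range(1, max_queries + 1):
        # Small integer color shift in [-eps, eps] per channel.
        noise = rng.integers(-eps, eps + 1, size=image.shape)
        candidate = np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)
        if predict_label(candidate) != true_label:
            return candidate, query  # label flipped within budget
    return None, max_queries  # attack failed within the query budget

# Toy "model": classifies by mean brightness (a stand-in for a real
# classifier that exposes only its output label).
def toy_model(img):
    return int(img.mean() > 128)

img = np.full((4, 4, 3), 129, dtype=np.uint8)  # barely class 1
adv, n_queries = similar_color_attack(img, toy_model, true_label=1, seed=0)
```

Real attacks in this setting typically use a smarter search than uniform noise (e.g., restricting shifts to perceptually similar colors), but the interface is the same: the attacker observes only the predicted label and pays one query per candidate image.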

Keywords

Adversarial attacks; image recognition; information security

Cite This Article

APA Style
Park, D., Yeon, S., Seo, H., Buu, S., & Lee, S. (2025). Practical adversarial attacks imperceptible to humans in visual recognition. Computer Modeling in Engineering & Sciences, 142(3), 2725–2737. https://doi.org/10.32604/cmes.2025.061732
Vancouver Style
Park D, Yeon S, Seo H, Buu S, Lee S. Practical adversarial attacks imperceptible to humans in visual recognition. Comput Model Eng Sci. 2025;142(3):2725–2737. https://doi.org/10.32604/cmes.2025.061732
IEEE Style
D. Park, S. Yeon, H. Seo, S. Buu, and S. Lee, “Practical Adversarial Attacks Imperceptible to Humans in Visual Recognition,” Comput. Model. Eng. Sci., vol. 142, no. 3, pp. 2725–2737, 2025. https://doi.org/10.32604/cmes.2025.061732



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.