Open Access

ARTICLE
Defend Against Adversarial Samples by Using Perceptual Hash

Changrui Liu1, Dengpan Ye1, *, Yueyun Shang2, Shunzhi Jiang1, Shiyu Li1, Yuan Mei1, Liqiang Wang3

1 Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China.
2 School of Mathematics and Statistics, South Central University for Nationalities, Wuhan, 430074, China.
3 University of Central Florida, 4000 Central Florida Blvd., Orlando, Florida, 32816, USA.

* Corresponding Author: Dengpan Ye.

Computers, Materials & Continua 2020, 62(3), 1365-1386. https://doi.org/10.32604/cmc.2020.07421

Abstract

Image classifiers based on Deep Neural Networks (DNNs) have been proven to be easily fooled by well-designed perturbations. Previous defense methods either require expensive computation or reduce the accuracy of the image classifiers. In this paper, we propose a novel defense method based on perceptual hashing. Our main goal is to disrupt the generation of perturbations by comparing the similarity of images, thereby achieving the defense. To verify our idea, we defended against two main attack methods (a white-box attack and a black-box attack) on different DNN-based image classifiers and show that, after applying our defense method, the attack success rate for all DNN-based image classifiers decreases significantly. More specifically, for the white-box attack, the attack success rate is reduced by an average of 36.3%. For the black-box attack, the average attack success rates of the targeted and non-targeted attacks are reduced by 72.8% and 76.7%, respectively. The proposed method is a simple and effective defense, and it provides a new way to defend against adversarial samples.
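
The building block the abstract relies on, comparing images by perceptual-hash similarity, can be sketched in a few lines. The Python snippet below is a minimal illustration, not the authors' exact method: it uses a simple average hash (aHash) with a Hamming-distance test, where the 8x8 hash size, the threshold of 10, and the file names are assumed values chosen purely for illustration.

# Minimal sketch of a perceptual-hash similarity check: two visually similar
# images (e.g., a clean image and a slightly perturbed query) should yield
# near-identical hashes, so a small Hamming distance can flag the pair.
# Assumptions: average hash (aHash) on an 8x8 grid and a threshold of 10
# are illustrative choices, not the paper's exact parameters.
from PIL import Image

HASH_SIZE = 8    # 8x8 grid -> 64-bit hash
THRESHOLD = 10   # max Hamming distance to call two images "similar"

def average_hash(path: str) -> int:
    """Downscale to grayscale 8x8, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_similar(path_a: str, path_b: str) -> bool:
    """True if the two images are perceptually close under aHash."""
    return hamming(average_hash(path_a), average_hash(path_b)) <= THRESHOLD

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    print(is_similar("query.png", "previous_query.png"))

Because an adversarial sample is constructed to stay visually close to its source image, a small perturbation moves the perceptual hash very little; a defense can exploit this by treating a stream of near-duplicate queries as a sign that perturbations are being iteratively generated.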

Keywords


Cite This Article

APA Style
Liu, C., Ye, D., Shang, Y., Jiang, S., Li, S. et al. (2020). Defend against adversarial samples by using perceptual hash. Computers, Materials & Continua, 62(3), 1365-1386. https://doi.org/10.32604/cmc.2020.07421
Vancouver Style
Liu C, Ye D, Shang Y, Jiang S, Li S, Mei Y, et al. Defend against adversarial samples by using perceptual hash. Comput Mater Contin. 2020;62(3):1365-1386. https://doi.org/10.32604/cmc.2020.07421
IEEE Style
C. Liu et al., “Defend Against Adversarial Samples by Using Perceptual Hash,” Comput. Mater. Contin., vol. 62, no. 3, pp. 1365-1386, 2020. https://doi.org/10.32604/cmc.2020.07421

Copyright © 2020 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.