Open Access

ARTICLE


An Adversarial Network-based Multi-model Black-box Attack

Bin Lin1, Jixin Chen2, Zhihong Zhang3, Yanlin Lai2, Xinlong Wu2, Lulu Tian4, Wangchi Cheng5,*

1 Sichuan Normal University, Chengdu, 610066, China
2 School of Computer Science, Southwest Petroleum University, Chengdu, 610500, China
3 AECC Sichuan Gas Turbine Establishment, Mianyang, 621700, China
4 Brunel University London, Uxbridge, Middlesex, UB83PH, United Kingdom
5 Institute of Logistics Science and Technology, Beijing, 100166, China

* Corresponding Author: Wangchi Cheng.

Intelligent Automation & Soft Computing 2021, 30(2), 641-649. https://doi.org/10.32604/iasc.2021.016818

Abstract

Research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. In this paper, we propose a generative model that produces adversarial examples capable of deceiving multiple deep learning models simultaneously. Unlike most popular adversarial attack algorithms, the one proposed in this paper is based on Generative Adversarial Networks (GANs): it can quickly produce adversarial examples and perform black-box attacks on multiple models. To enhance the transferability of the samples generated by our approach, we use multiple neural networks in the training process. Experimental results on MNIST show that our method can efficiently generate adversarial examples and can simultaneously attack several classes of deep neural networks, such as fully connected neural networks (FCNNs), convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We also performed a black-box attack on VGG16: when the test data cover ten classes (0–9) the attack success rate is 97.68%, and when they cover seven classes (0–6) it rises to 98.25%.
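To make the multi-model attack objective concrete: the paper trains a GAN whose generator produces perturbations that fool an ensemble of victim models at once. The sketch below is not the authors' GAN pipeline; it is a deliberately tiny, hypothetical analogue that optimises a single perturbation against the summed, per-model-normalised cross-entropy of two toy linear softmax classifiers (stand-ins for the FCNN/CNN/RNN ensemble), under an L-infinity budget. All names, models, and constants here are illustrative assumptions, not from the paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(W, x):
    # a linear softmax classifier: class = argmax of W @ x
    return int(np.argmax(W @ x))

def multi_model_attack(models, x, y, eps=1.5, alpha=0.1, steps=25):
    """Find ONE perturbation delta (||delta||_inf <= eps) that raises the
    true-class cross-entropy of EVERY model in the ensemble at once.
    Per-model gradient normalisation keeps one victim from dominating."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = np.zeros_like(x)
        for W in models:
            p = softmax(W @ (x + delta))
            e_y = np.eye(W.shape[0])[y]          # one-hot true label
            grad = W.T @ (p - e_y)               # d(CE)/dx for a linear model
            g += grad / (np.linalg.norm(grad) + 1e-12)
        # sign-ascent step, projected back onto the L-inf ball of radius eps
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
    return x + delta, delta

# Two toy "victim" models that both classify the clean input x as class 0
W_a = np.array([[1., 1., 1., 1.],
                [0., 0., 0., 0.]])
W_b = np.array([[0., 0., 2., 2.],
                [1., 1., 0., 0.]])
x, y = np.ones(4), 0

x_adv, delta = multi_model_attack([W_a, W_b], x, y)
fooled = sum(predict(W, x_adv) != y for W in (W_a, W_b))
print("clean preds:", predict(W_a, x), predict(W_b, x))
print("adv preds:  ", predict(W_a, x_adv), predict(W_b, x_adv),
      "| models fooled:", fooled)
```

The same one-perturbation-many-victims objective is what drives transferability in the paper's setting; the GAN replaces this per-input optimisation with a generator network that amortises it across inputs.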

Keywords


Cite This Article

B. Lin, J. Chen, Z. Zhang, Y. Lai, X. Wu et al., "An adversarial network-based multi-model black-box attack," Intelligent Automation & Soft Computing, vol. 30, no. 2, pp. 641–649, 2021.



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.