Bin Lin1, Jixin Chen2, Zhihong Zhang3, Yanlin Lai2, Xinlong Wu2, Lulu Tian4, Wangchi Cheng5,*
Intelligent Automation & Soft Computing, Vol.30, No.2, pp. 641-649, 2021, DOI:10.32604/iasc.2021.016818
- 11 August 2021
Abstract Research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. In this paper, we propose a generative model to explore how to produce adversarial examples that can deceive multiple deep learning models simultaneously. Unlike most popular adversarial attack algorithms, the one proposed in this paper is based on Generative Adversarial Networks (GANs). It can quickly produce adversarial examples and perform black-box attacks on multiple models. To enhance the transferability of the samples generated by our approach, we use multiple neural networks in the training process. Experimental results on MNIST showed that…
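The abstract's core idea — training a generator against several classifiers at once so the resulting perturbations transfer across models — can be illustrated with a minimal sketch. This is not the paper's actual architecture or loss; the network sizes, the ε-bound on the perturbation, and the random stand-in classifiers are all assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class Generator(nn.Module):
    """Maps a clean image to a bounded adversarial version of it.
    The Tanh output scaled by eps is an assumed way to cap the perturbation."""
    def __init__(self, dim=784, eps=0.3):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(),
            nn.Linear(256, dim), nn.Tanh(),
        )

    def forward(self, x):
        # Add a bounded perturbation, then clamp back to valid pixel range.
        return torch.clamp(x + self.eps * self.net(x), 0.0, 1.0)

def make_classifier(dim=784, n_cls=10):
    # Stand-in for a pretrained target model (random weights here).
    return nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, n_cls))

# Ensemble of target models, as in the multi-model training the abstract describes.
models = [make_classifier() for _ in range(3)]
for m in models:
    m.eval()  # targets are fixed; only the generator is trained

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(32, 784)           # stand-in for a flattened MNIST batch
y = torch.randint(0, 10, (32,))   # stand-in true labels

for _ in range(5):  # a few steps, for illustration only
    x_adv = gen(x)
    # Untargeted attack: push every model's prediction away from the true
    # label, i.e. minimize the negative average ensemble cross-entropy.
    adv_loss = -sum(loss_fn(m(x_adv), y) for m in models) / len(models)
    opt.zero_grad()
    adv_loss.backward()
    opt.step()

x_adv = gen(x).detach()
```

Averaging the loss over several fixed classifiers, rather than attacking one model, is what encourages the generated examples to transfer; at inference the generator produces adversarial inputs in a single forward pass, which is what makes this family of attacks fast compared with iterative gradient methods.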