Biying Deng¹, Ziyong Ran¹, Jixin Chen¹, Desheng Zheng¹·*, Qiao Yang², Lulu Tian³
Intelligent Automation & Soft Computing, Vol. 30, No. 3, pp. 889–898, 2021, DOI:10.32604/iasc.2021.019727 — published 20 August 2021
Abstract In recent years, with the popularization of deep learning, increasing attention has been paid to the security of deep neural networks. A wide variety of machine learning algorithms can attack neural networks and cause them to misclassify target samples. However, previous attack algorithms compute a unique adversarial example against the corresponding model; they cannot extract attack features and generate corresponding samples in batches. In this paper, a Generative Adversarial Network (GAN) is used to learn the distribution of adversarial examples generated by the Fast Gradient Sign Method (FGSM).
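The abstract refers to FGSM as the source of the adversarial examples whose distribution the GAN learns. As background, below is a minimal NumPy sketch of the standard FGSM update, x_adv = x + ε·sign(∇ₓL), illustrated on a simple logistic-regression model; the model, weights, and epsilon value are illustrative assumptions, not the paper's setup or code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, b, eps):
    """One FGSM step against a logistic model p = sigmoid(w.x + b).

    For binary cross-entropy loss, dL/dx = (p - y) * w, so the attack
    perturbs the input by eps * sign((p - y) * w).
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w              # analytic input gradient of the loss
    return x + eps * np.sign(grad_x)  # signed, eps-bounded perturbation

# Illustrative example: a point confidently classified as class 1
# is pushed toward the decision boundary.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])               # w.x + b = 1.5  ->  p ~ 0.82
x_adv = fgsm_example(x, y=1.0, w=w, b=b, eps=0.5)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)  # confidence drops after the attack
```

The key property, exploited by the paper's batch-generation idea, is that FGSM is a fixed, differentiable rule: given a model's input gradient, the perturbation is a single sign-and-scale operation, so its outputs form a learnable distribution.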