Open Access
ARTICLE
Deep Image Restoration Model: A Defense Method Against Adversarial Attacks
1 Department of Information Technology, University of Central Punjab, Lahore, 54000, Pakistan
2 Department of Communication Technology and Network, Faculty of Computer Science and Information Technology, University Putra Malaysia, Selangor, 43400, Malaysia
3 Department of Computer Science, COMSATS University Islamabad, Lahore Campus, 54000, Pakistan
4 Othman Yeop Abdullah Graduate School of Business, University Utara Malaysia, Kuala Lumpur, 50300, Malaysia
* Corresponding Author: Kazim Ali. Email:
Computers, Materials & Continua 2022, 71(2), 2209-2224. https://doi.org/10.32604/cmc.2022.020111
Received 10 May 2021; Accepted 27 July 2021; Issue published 07 December 2021
Abstract
Deep learning and computer vision are rapidly growing fields in the modern world of information technology. Deep learning algorithms and computer vision have achieved great success in applications such as image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success in various applications, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that have been intentionally and slightly perturbed. The changes are imperceptible to humans, yet a model misclassifies such inputs with high probability, which severely degrades its performance and predictions. In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model classifies them correctly again. We show that our defense method against adversarial attacks, based on a deep image restoration model, is simple and state-of-the-art by providing strong experimental evidence. We used the MNIST and CIFAR10 datasets for the experiments and analysis of our defense method. Finally, we compared our method with other state-of-the-art defense methods and show that our results are better than those of the rival methods.
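The abstract describes a preprocessing-style defense: an adversarial image is first passed through a deep restoration network, and the restored image is then fed to the unchanged target classifier. The sketch below illustrates that pipeline only; the `RestorationNet` architecture, the `RestorationDefense` wrapper, and all layer sizes are illustrative assumptions and not the authors' exact model.

```python
# Minimal sketch of a restoration-based defense pipeline (assumed architecture,
# not the paper's exact model): restore the perturbed image, then classify it.
import torch
import torch.nn as nn


class RestorationNet(nn.Module):
    """Illustrative encoder-decoder that maps a perturbed image back
    toward its clean counterpart (e.g., 1-channel 28x28 MNIST images)."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


class RestorationDefense(nn.Module):
    """Wraps a pretrained target classifier with a restoration front end."""

    def __init__(self, restorer: nn.Module, classifier: nn.Module):
        super().__init__()
        self.restorer = restorer
        self.classifier = classifier

    def forward(self, x_adv: torch.Tensor) -> torch.Tensor:
        x_restored = self.restorer(x_adv)   # attempt to remove the adversarial perturbation
        return self.classifier(x_restored)  # classify the restored image as usual


# Usage sketch: `target_model` stands for any pretrained classifier; the
# restorer would be trained beforehand on (adversarial, clean) image pairs.
# defended = RestorationDefense(RestorationNet(channels=1), target_model)
# logits = defended(x_adv)
```

The key design point conveyed by the abstract is that the target classifier itself is left untouched; only the restoration stage is trained, so the defense can be placed in front of an existing model.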
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.