Muhammad Shahid Amin1, Jamal Hussain Shah1, Mussarat Yasmin1, Ghulam Jillani Ansari2, Muhammad Attique Khan3, Usman Tariq4, Ye Jin Kim5, Byoungchol Chang6,*
CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 4423-4439, 2022, DOI:10.32604/cmc.2022.030432
- 16 June 2022
Abstract Due to the rapid development of Artificial Intelligence (AI) and Deep Learning (DL), it has become difficult to maintain the security and robustness of these algorithms with the emergence of adversarial sampling. Such techniques exploit the sensitivity of these models: fake samples cause AI and DL models to produce divergent results. Adversarial attacks that have been successfully implemented in real-world scenarios highlight their practical relevance even further. In this regard, minor modifications of input images, known as "adversarial attacks," can alter the performance of competing models dramatically. Recently, such attacks and the corresponding defensive strategies have been gaining a lot…
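As a hedged illustration of the idea summarized in the abstract (not the paper's own method), the sketch below shows how a minor, bounded modification of an input can shift a model's prediction. It uses the well-known fast gradient sign method (FGSM) on a toy logistic-regression "model" in place of a deep network; all names and parameters here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    # Logistic function, used as our toy model's output.
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM sketch: return x + eps * sign(dL/dx) for the
    binary cross-entropy loss of a logistic model p = sigmoid(w.x + b)."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w          # dL/dx for logistic loss
    return x + eps * np.sign(grad_x)

# Toy example: a random linear "model" and a clean input with label 1.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)

# The adversarial copy stays within an L-infinity ball of radius eps...
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12
# ...yet the model's confidence in the true class strictly decreases.
assert sigmoid(np.dot(w, x_adv) + b) < sigmoid(np.dot(w, x) + b)
print("perturbation bounded, confidence reduced")
```

The same sign-of-gradient step, applied to an image classifier's loss with respect to the pixels, yields perturbations that are nearly invisible to humans yet change the predicted label, which is the phenomenon the abstract refers to.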