Open Access
ARTICLE
A Two Stream Fusion Assisted Deep Learning Framework for Stomach Diseases Classification
1 Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
2 Department of Information Sciences, University of Education, Lahore (Multan Campus), Pakistan
3 Department of Computer Science, HITEC University Taxila, Pakistan
4 College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
5 Department of Computer Science, Hanyang University, Seoul, 04763, Korea
6 Center for Computational Social Science, Hanyang University, Seoul, 04763, Korea
* Corresponding Author: Byoungchol Chang. Email:
Computers, Materials & Continua 2022, 73(2), 4423-4439. https://doi.org/10.32604/cmc.2022.030432
Received 25 March 2022; Accepted 19 May 2022; Issue published 16 June 2022
Abstract
With the rapid development of Artificial Intelligence (AI) and Deep Learning (DL), it has become difficult to maintain the security and robustness of these techniques and algorithms against the newly emerged threat of adversarial sampling. These models are sensitive to such samples: fake inputs cause AI and DL models to produce divergent results. Adversarial attacks that have been successfully mounted in real-world scenarios highlight their practical relevance even further. In this regard, minor modifications of input images produce "adversarial attacks" that dramatically alter a model's performance. Recently, such attacks and the corresponding defensive strategies have been attracting considerable attention from machine learning and security researchers. Doctors use several kinds of technologies to examine patient abnormalities, including Wireless Capsule Endoscopy (WCE). However, with WCE it is very difficult for doctors to detect an abnormality within the images, since inspecting them and deciding on an abnormality takes considerable time. As a result, it can take weeks to generate a patient's test report, which is tiring and strenuous for patients. Researchers have therefore proposed computerized technologies, which are better suited to the classification and detection of such abnormalities. As far as classification is concerned, adversarial attacks corrupt the classified images. Nowadays, machine learning is the mainstream defensive approach against adversarial attacks. Hence, this research exposes such attacks by perturbing the datasets with noise, including salt-and-pepper noise and the Fast Gradient Sign Method (FGSM), and then shows how machine learning algorithms handle these perturbations so that the attacks are averted. Results obtained on WCE images vulnerable to adversarial attack reach 96.30% accuracy and show that the proposed defensive model is robust compared with competing existing methods.
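For reference, the two perturbations named in the abstract can be sketched as follows. This is a minimal illustration in PyTorch, assuming images scaled to [0, 1], a generic classifier `model`, and an illustrative perturbation budget `epsilon`; it is not the authors' implementation.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, image, label, epsilon=0.03):
        """Craft an FGSM adversarial example: x_adv = x + epsilon * sign(grad_x loss)."""
        image = image.clone().detach().requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(image), label)
        loss.backward()
        # Step every pixel slightly in the direction that increases the loss.
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

    def add_salt_pepper(image, amount=0.02):
        """Corrupt a random fraction of pixels with salt (1.0) and pepper (0.0) noise."""
        noisy = image.clone()
        mask = torch.rand_like(noisy)
        noisy[mask < amount / 2] = 0.0        # pepper
        noisy[mask > 1.0 - amount / 2] = 1.0  # salt
        return noisy

A defense in the spirit described above would mix such perturbed copies of the WCE images into the training data so that the classifier learns to tolerate both kinds of corruption.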
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.