Search Results (5)
  • Open Access

    ARTICLE

    Explainable Artificial Intelligence (XAI) Model for Cancer Image Classification

Amit Singhal1, Krishna Kant Agrawal2, Angeles Quezada3, Adrian Rodriguez Aguiñaga4, Samantha Jiménez4, Satya Prakash Yadav5,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.1, pp. 401-441, 2024, DOI:10.32604/cmes.2024.051363 - 20 August 2024

    Abstract The use of Explainable Artificial Intelligence (XAI) models becomes increasingly important for making decisions in smart healthcare environments. It is to make sure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions made by these algorithms. These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models to classify cancer images. It describes the different levels of explainability…

  • Open Access

    ARTICLE

    Contemporary Study for Detection of COVID-19 Using Machine Learning with Explainable AI

    Saad Akbar1,2, Humera Azam1, Sulaiman Sulmi Almutairi3,*, Omar Alqahtani4, Habib Shah4, Aliya Aleryani4

    CMC-Computers, Materials & Continua, Vol.80, No.1, pp. 1075-1104, 2024, DOI:10.32604/cmc.2024.050913 - 18 July 2024

    Abstract The prompt spread of COVID-19 has emphasized the necessity for effective and precise diagnostic tools. In this article, a hybrid approach, in terms of both dataset and methodology, is proposed: a previously unexplored dataset obtained from a private hospital is used for detecting COVID-19, pneumonia, and normal conditions in chest X-ray images (CXIs), coupled with Explainable Artificial Intelligence (XAI). Our study leverages less preprocessing with pre-trained cutting-edge models like InceptionV3, VGG16, and VGG19 that excel in the task of feature extraction. The methodology is further enhanced by the inclusion of the t-SNE (t-Distributed…

  • Open Access

    ARTICLE

    XA-GANomaly: An Explainable Adaptive Semi-Supervised Learning Method for Intrusion Detection Using GANomaly

    Yuna Han1, Hangbae Chang2,*

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 221-237, 2023, DOI:10.32604/cmc.2023.039463 - 08 June 2023

    Abstract Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission. Recent research has focused on using semi-supervised learning mechanisms to identify abnormal network traffic to deal with labeled and unlabeled data in the industry. However, real-time training and classifying network traffic pose challenges, as they can lead to the degradation of the overall dataset and difficulties preventing attacks. Additionally, existing semi-supervised learning research might need to analyze the experimental results comprehensively. This paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning using GANomaly, an image anomalous…

  • Open Access

    ARTICLE

    Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods

    Wahidul Hasan Abir1, Faria Rahman Khanam1, Kazi Nabiul Alam1, Myriam Hadjouni2, Hela Elmannai3, Sami Bourouis4, Rajesh Dey5, Mohammad Monirujjaman Khan1,*

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2151-2169, 2023, DOI:10.32604/iasc.2023.029653 - 19 July 2022

    Abstract Nowadays, deepfake is wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person’s likeness with another’s in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfake uses the latest technology like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) to construct automated methods for creating fake content that is becoming increasingly difficult…

  • Open Access

    REVIEW

    Explainable Artificial Intelligence–A New Step towards the Trust in Medical Diagnosis with AI Frameworks: A Review

    Nilkanth Mukund Deshpande1,2, Shilpa Gite6,7,*, Biswajeet Pradhan3,4,5, Mazen Ebraheem Assiri4

    CMES-Computer Modeling in Engineering & Sciences, Vol.133, No.3, pp. 843-872, 2022, DOI:10.32604/cmes.2022.021225 - 03 August 2022

    Abstract Machine learning (ML) has emerged as a critical enabling tool in the sciences and industry in recent years. Today’s machine learning algorithms can achieve outstanding performance on an expanding variety of complex tasks, thanks to advancements in technique, the availability of enormous databases, and improved computing power. Deep learning models are at the forefront of this advancement. However, because of their nested nonlinear structure, these strong models are termed as “black boxes,” as they provide no information about how they arrive at their conclusions. Such a lack of transparencies may be unacceptable in many applications, such…
