Search Results (18)
  • Open Access

    ARTICLE

    Explainable Artificial Intelligence (XAI) Model for Cancer Image Classification

    Amit Singhal1, Krishna Kant Agrawal2, Angeles Quezada3, Adrian Rodriguez Aguiñaga4, Samantha Jiménez4, Satya Prakash Yadav5,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.1, pp. 401-441, 2024, DOI:10.32604/cmes.2024.051363 - 20 August 2024

    Abstract The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments, ensuring that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions these algorithms make. These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models to classify cancer images. It describes the different levels of explainability…

  • Open Access

    ARTICLE

    MAIPFE: An Efficient Multimodal Approach Integrating Pre-Emptive Analysis, Personalized Feature Selection, and Explainable AI

    Moshe Dayan Sirapangi1, S. Gopikrishnan1,*

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2229-2251, 2024, DOI:10.32604/cmc.2024.047438 - 15 May 2024

    Abstract Medical Internet of Things (IoT) devices are becoming increasingly common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way. Existing methods, while useful, have limitations in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. MAIPFE is a multimodal approach integrating pre-emptive analysis, personalized feature selection, and explainable AI for real-time health…

  • Open Access

    ARTICLE

    Adaptation of Federated Explainable Artificial Intelligence for Efficient and Secure E-Healthcare Systems

    Rabia Abid1, Muhammad Rizwan2, Abdulatif Alabdulatif3,*, Abdullah Alnajim4, Meznah Alamro5, Mourade Azrour6

    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 3413-3429, 2024, DOI:10.32604/cmc.2024.046880 - 26 March 2024

    Abstract Explainable Artificial Intelligence (XAI) enhances decision-making and improves rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and handle a large amount of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the…

  • Open Access

    ARTICLE

    Explainable Classification Model for Android Malware Analysis Using API and Permission-Based Features

    Nida Aslam1,*, Irfan Ullah Khan2, Salma Abdulrahman Bader2, Aisha Alansari3, Lama Abdullah Alaqeel2, Razan Mohammed Khormy2, Zahra Abdultawab AlKubaish2, Tariq Hussain4,*

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3167-3188, 2023, DOI:10.32604/cmc.2023.039721 - 08 October 2023

    Abstract One of the most widely used smartphone operating systems, Android, is vulnerable to cutting-edge malware that employs sophisticated logic. Such malware attacks could lead to the execution of unauthorized acts on the victims’ devices, stealing personal information and causing hardware damage. In previous studies, machine learning (ML) has shown its efficacy in detecting malware events and classifying their types. However, attackers are continuously developing more sophisticated methods to bypass detection. Therefore, up-to-date datasets must be utilized to implement proactive models for detecting malware events in Android mobile devices. Accordingly, this study employed ML algorithms to…

  • Open Access

    ARTICLE

    Explainable Artificial Intelligence-Based Model Drift Detection Applicable to Unsupervised Environments

    Yongsoo Lee, Yeeun Lee, Eungyu Lee, Taejin Lee*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1701-1719, 2023, DOI:10.32604/cmc.2023.040235 - 30 August 2023

    Abstract Cybersecurity increasingly relies on machine learning (ML) models to respond to and detect attacks. However, the rapidly changing data environment makes model life-cycle management after deployment essential. Real-time detection of drift signals from various threats is fundamental for effectively managing deployed models. However, detecting drift in unsupervised environments can be challenging. This study introduces a novel approach leveraging Shapley additive explanations (SHAP), a widely recognized explainability technique in ML, to address drift detection in unsupervised settings. The proposed method incorporates a range of plots and statistical techniques to enhance drift detection reliability and introduces a…
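    The explanation-drift idea this abstract describes can be sketched in a few lines: compare the per-feature distribution of attribution values between a reference window and the current window, and flag features whose distributions diverge. This is a minimal illustration only, assuming attribution matrices (e.g., SHAP values) are already computed; the function names and the 0.3 threshold are illustrative choices, not taken from the paper.

    ```python
    from bisect import bisect_right

    def ks_statistic(a, b):
        """Two-sample Kolmogorov-Smirnov statistic (no p-value)."""
        a, b = sorted(a), sorted(b)
        # Max gap between the two empirical CDFs over all observed points.
        return max(abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
                   for x in a + b)

    def drift_signal(ref_attr, cur_attr, threshold=0.3):
        """Flag features whose attribution distribution shifted between windows.

        ref_attr, cur_attr: lists of rows, one attribution vector per sample.
        Returns (per-feature KS statistics, per-feature drift flags).
        """
        n_features = len(ref_attr[0])
        stats = [ks_statistic([row[j] for row in ref_attr],
                              [row[j] for row in cur_attr])
                 for j in range(n_features)]
        return stats, [s > threshold for s in stats]
    ```

    In practice the attribution matrices would come from an explainer run over each monitoring window; the KS statistic is one of several distance measures (the paper also mentions plots and other statistical techniques) that could drive the drift flag.
    
    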

  • Open Access

    ARTICLE

    XA-GANomaly: An Explainable Adaptive Semi-Supervised Learning Method for Intrusion Detection Using GANomaly

    Yuna Han1, Hangbae Chang2,*

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 221-237, 2023, DOI:10.32604/cmc.2023.039463 - 08 June 2023

    Abstract Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission. Recent research has focused on using semi-supervised learning mechanisms to identify abnormal network traffic to deal with labeled and unlabeled data in the industry. However, real-time training and classifying network traffic pose challenges, as they can lead to the degradation of the overall dataset and difficulties preventing attacks. Additionally, existing semi-supervised learning research might need to analyze the experimental results comprehensively. This paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning using GANomaly, an image anomalous…

  • Open Access

    ARTICLE

    Efficient Explanation and Evaluation Methodology Based on Hybrid Feature Dropout

    Jingang Kim, Suengbum Lim, Taejin Lee*

    Computer Systems Science and Engineering, Vol.47, No.1, pp. 471-490, 2023, DOI:10.32604/csse.2023.038413 - 26 May 2023

    Abstract AI-related research is conducted in various ways, but the reliability of AI prediction results is currently insufficient, so expert decisions are indispensable for tasks that require essential decision-making. XAI (eXplainable AI) is studied to improve the reliability of AI. However, each XAI methodology shows different results on the same data set and model. This means that XAI results must be given meaning, and a lot of noise emerges. This paper proposes the HFD (Hybrid Feature Dropout)-based XAI and evaluation methodology. The proposed XAI methodology can mitigate shortcomings, such as incorrect feature weights and…

  • Open Access

    ARTICLE

    Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for Clustered IoT Driven Ubiquitous Computing System

    Reda Salama1, Mahmoud Ragab1,2,*

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 2917-2932, 2023, DOI:10.32604/csse.2023.037016 - 03 April 2023

    Abstract In the Internet of Things (IoT) based system, the multi-level client’s requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). The UCS necessitates heterogeneity, management level, and data transmission for distributed users. Simultaneously, security remains a major issue in the IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. The recent developments of explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel…

  • Open Access

    ARTICLE

    Quantum Inspired Differential Evolution with Explainable Artificial Intelligence-Based COVID-19 Detection

    Abdullah M. Basahel, Mohammad Yamin*

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 209-224, 2023, DOI:10.32604/csse.2023.034449 - 20 January 2023

    Abstract Recent advancements in the Internet of Things (IoT), 5G networks, and cloud computing (CC) have led to the development of Human-centric IoT (HIoT) applications that transform human physical monitoring based on machine monitoring. The HIoT systems find use in several applications such as smart cities, healthcare, transportation, etc. Besides, the HIoT system and explainable artificial intelligence (XAI) tools can be deployed in the healthcare sector for effective decision-making. The COVID-19 pandemic has become a global health issue that necessitates automated and effective diagnostic tools to detect the disease at the initial stage. This article presents…

  • Open Access

    ARTICLE

    Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods

    Wahidul Hasan Abir1, Faria Rahman Khanam1, Kazi Nabiul Alam1, Myriam Hadjouni2, Hela Elmannai3, Sami Bourouis4, Rajesh Dey5, Mohammad Monirujjaman Khan1,*

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2151-2169, 2023, DOI:10.32604/iasc.2023.029653 - 19 July 2022

    Abstract Nowadays, deepfake is wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person’s likeness with another person in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfake uses the latest technology like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) to construct automated methods for creating fake content that is becoming increasingly difficult…

Displaying results 1-10 of 18 (page 1).