Search Results (36)
  • Open Access

    ARTICLE

    GliomaCNN: An Effective Lightweight CNN Model in Assessment of Classifying Brain Tumor from Magnetic Resonance Images Using Explainable AI

    Md. Atiqur Rahman1, Mustavi Ibne Masum1, Khan Md Hasib2, M. F. Mridha3,*, Sultan Alfarhood4, Mejdl Safran4,*, Dunren Che5

    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.3, pp. 2425-2448, 2024, DOI:10.32604/cmes.2024.050760

    Abstract Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality. This study addresses the pressing issue of brain tumor classification using magnetic resonance imaging (MRI). It focuses on distinguishing between Low-Grade Gliomas (LGG) and High-Grade Gliomas (HGG). LGGs are benign and typically manageable with surgical resection, while HGGs are malignant and more aggressive. The research introduces an innovative custom convolutional neural network (CNN) model, GliomaCNN, which stands out as a lightweight CNN model compared to its predecessors. The research utilized the BraTS 2020…
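
The abstract describes a compact CNN for binary LGG/HGG classification. Below is a minimal PyTorch sketch of that kind of lightweight model; the layer sizes and the `TinyGliomaNet` name are illustrative assumptions, not GliomaCNN's published architecture.

```python
# Minimal sketch of a lightweight CNN for binary LGG/HGG slice classification.
# Layer sizes are assumptions for illustration, not GliomaCNN's actual design.
import torch
import torch.nn as nn

class TinyGliomaNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the head tiny
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = TinyGliomaNet()(torch.randn(4, 1, 240, 240))  # 240x240 BraTS-style slices
```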

  • Open Access

    ARTICLE

    CrossLinkNet: An Explainable and Trustworthy AI Framework for Whole-Slide Images Segmentation

    Peng Xiao1, Qi Zhong2, Jingxue Chen1, Dongyuan Wu1, Zhen Qin1, Erqiang Zhou1,*

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4703-4724, 2024, DOI:10.32604/cmc.2024.049791

    Abstract In the intelligent medical diagnosis area, Artificial Intelligence (AI)’s trustworthiness, reliability, and interpretability are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, which is the gold standard of cancer diagnosis and relies on a pathologist’s careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but…

  • Open Access

    ARTICLE

    A Study on the Explainability of Thyroid Cancer Prediction: SHAP Values and Association-Rule Based Feature Integration Framework

    Sujithra Sankar1,*, S. Sathyalakshmi2

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 3111-3138, 2024, DOI:10.32604/cmc.2024.048408

    Abstract In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule based feature-integrated machine learning model which shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as…
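
Since the abstract centers on SHAP values, here is a minimal sketch of computing them for a tree-based classifier with the `shap` library; the bundled demo dataset and the XGBoost model are stand-ins, not the authors' thyroid-cancer data or pipeline.

```python
# Minimal SHAP sketch for a tree-based classifier; the dataset and model
# are placeholders, not the paper's thyroid-cancer features or pipeline.
import shap
import xgboost
from sklearn.model_selection import train_test_split

X, y = shap.datasets.adult()  # demo tabular dataset bundled with shap
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgboost.XGBClassifier().fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which features push predictions up or down across the test set
shap.summary_plot(shap_values, X_test)
```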

  • Open Access

    ARTICLE

    MAIPFE: An Efficient Multimodal Approach Integrating Pre-Emptive Analysis, Personalized Feature Selection, and Explainable AI

    Moshe Dayan Sirapangi1, S. Gopikrishnan1,*

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2229-2251, 2024, DOI:10.32604/cmc.2024.047438

    Abstract Medical Internet of Things (IoT) devices are becoming more and more common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way. Existing methods, while useful, have limitations in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. MAIPFE is a multimodal approach integrating pre-emptive analysis, personalized feature selection, and explainable AI for real-time health…

  • Open Access

    ARTICLE

    Adaptation of Federated Explainable Artificial Intelligence for Efficient and Secure E-Healthcare Systems

    Rabia Abid1, Muhammad Rizwan2, Abdulatif Alabdulatif3,*, Abdullah Alnajim4, Meznah Alamro5, Mourade Azrour6

    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 3413-3429, 2024, DOI:10.32604/cmc.2024.046880

    Abstract Explainable Artificial Intelligence (XAI) enhances decision-making and improves on rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) based algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and handle a large amount of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the…
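
As a rough illustration of the federated step this abstract refers to, here is a minimal FedAvg-style weight-averaging sketch; it is a generic federated-learning building block, not the paper's actual FML protocol.

```python
# Minimal FedAvg sketch: average client model weights in proportion to each
# client's dataset size. Generic illustration, not the paper's FML protocol.
# Assumes all state tensors are floating point.
from typing import Dict, List
import torch

def fed_avg(client_states: List[Dict[str, torch.Tensor]],
            client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    total = sum(client_sizes)
    avg = {k: torch.zeros_like(v) for k, v in client_states[0].items()}
    for state, n in zip(client_states, client_sizes):
        for k, v in state.items():
            avg[k] += v * (n / total)  # weight by local data volume
    return avg  # load into the global model via model.load_state_dict(avg)
```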

  • Open Access

    ARTICLE

    Transparent and Accurate COVID-19 Diagnosis: Integrating Explainable AI with Advanced Deep Learning in CT Imaging

    Mohammad Mehedi Hassan1,*, Salman A. AlQahtani2, Mabrook S. AlRakhami1, Ahmed Zohier Elhendi3

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 3101-3123, 2024, DOI:10.32604/cmes.2024.047940

    Abstract In the current landscape of the COVID-19 pandemic, the utilization of deep learning in medical imaging, especially in chest computed tomography (CT) scan analysis for virus detection, has become increasingly significant. Despite its potential, deep learning’s “black box” nature has been a major impediment to its broader acceptance in clinical environments, where transparency in decision-making is imperative. To bridge this gap, our research integrates Explainable AI (XAI) techniques, specifically the Local Interpretable Model-Agnostic Explanations (LIME) method, with advanced deep learning models. This integration forms a sophisticated and transparent framework for COVID-19 identification, enhancing the capability…
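
The framework pairs a deep model with LIME. Below is a minimal sketch of LIME on an image classifier using the `lime` library; `model` and `ct_slice` are assumed placeholders (a fitted classifier and one CT slice), not the authors' network or data.

```python
# Minimal LIME image-explanation sketch; `model` and `ct_slice` are assumed
# placeholders (a fitted classifier and one HxWx3 CT slice as a numpy array).
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # Hypothetical wrapper: returns (N, num_classes) class probabilities.
    return model.predict(np.asarray(images))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    ct_slice, predict_fn,
    top_labels=2,
    num_samples=1000,  # perturbed copies used to fit the local surrogate
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
overlay = mark_boundaries(img / 255.0, mask)  # highlights supporting regions
```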

  • Open Access

    ARTICLE

    Explainable Conformer Network for Detection of COVID-19 Pneumonia from Chest CT Scan: From Concepts toward Clinical Explainability

    Mohamed Abdel-Basset1, Hossam Hawash1, Mohamed Abouhawwash2,3,*, S. S. Askar4, Alshaimaa A. Tantawy1

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 1171-1187, 2024, DOI:10.32604/cmc.2023.044425

    Abstract The early implementation of treatment therapies necessitates the swift and precise identification of COVID-19 pneumonia by the analysis of chest CT scans. This study aims to investigate the indispensable need for precise and interpretable diagnostic tools for improving clinical decision-making for COVID-19 diagnosis. This paper proposes a novel deep learning approach, called Conformer Network, for explainable discrimination of viral pneumonia depending on the lung Region of Infections (ROI) within a single modality radiographic CT scan. Firstly, an efficient U-shaped transformer network is integrated for lung image segmentation. Then, a robust transfer learning technique is introduced…
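
As a generic illustration of the transfer-learning step the abstract mentions, here is a minimal fine-tuning sketch: freeze a pretrained backbone and train only a new classification head. The ResNet backbone is an assumption for demonstration; the paper's Conformer Network is a custom transformer not reproduced here.

```python
# Minimal transfer-learning sketch: freeze a pretrained backbone and train a
# new head. The ResNet choice is an assumption, not the paper's Conformer.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")
for p in backbone.parameters():
    p.requires_grad = False  # keep pretrained features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # trainable 2-class head
```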

  • Open Access

    ARTICLE

    Explainable Classification Model for Android Malware Analysis Using API and Permission-Based Features

    Nida Aslam1,*, Irfan Ullah Khan2, Salma Abdulrahman Bader2, Aisha Alansari3, Lama Abdullah Alaqeel2, Razan Mohammed Khormy2, Zahra Abdultawab AlKubaish2, Tariq Hussain4,*

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3167-3188, 2023, DOI:10.32604/cmc.2023.039721

    Abstract One of the most widely used smartphone operating systems, Android, is vulnerable to cutting-edge malware that employs sophisticated logic. Such malware attacks could lead to the execution of unauthorized acts on the victims’ devices, stealing personal information and causing hardware damage. In previous studies, machine learning (ML) has shown its efficacy in detecting malware events and classifying their types. However, attackers are continuously developing more sophisticated methods to bypass detection. Therefore, up-to-date datasets must be utilized to implement proactive models for detecting malware events in Android mobile devices. To this end, this study employed ML algorithms to…
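
The study classifies apps from API- and permission-based features. Here is a minimal sketch of that kind of pipeline; the permission strings, toy labels, and random-forest choice are illustrative assumptions, not the paper's dataset or configuration.

```python
# Minimal sketch of permission-based malware classification; the permission
# lists and model choice are illustrative, not the paper's dataset or setup.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

apps = [
    "INTERNET READ_SMS SEND_SMS",                     # hypothetical manifests
    "INTERNET ACCESS_FINE_LOCATION",
    "READ_CONTACTS SEND_SMS RECEIVE_BOOT_COMPLETED",
    "INTERNET",
]
labels = [1, 0, 1, 0]  # 1 = malicious, 0 = benign (toy labels)

vec = CountVectorizer(binary=True)  # one binary feature per permission
X = vec.fit_transform(apps)
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict(vec.transform(["INTERNET SEND_SMS"])))
```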

  • Open Access

    EDITORIAL

    Grad-CAM: Understanding AI Models

    Shuihua Wang1,2, Yudong Zhang2,*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1321-1324, 2023, DOI:10.32604/cmc.2023.041419

    Abstract This article has no abstract.
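
For readers of this editorial, a minimal Grad-CAM sketch in PyTorch follows; it is a generic implementation of the technique, assuming a pretrained ResNet as the model, and is not code from the article itself.

```python
# Minimal Grad-CAM sketch: weight the last conv block's feature maps by the
# spatially pooled gradients of the predicted class score, then upsample.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
feats, grads = {}, {}

layer = model.layer4  # last convolutional block
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)  # placeholder input image
model(x)[0].max().backward()     # backprop the top class score

w = grads["a"].mean(dim=(2, 3), keepdim=True)  # pool gradients per channel
cam = F.relu((w * feats["a"]).sum(dim=1))      # weighted feature-map sum
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)  # back to input size
```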

  • Open Access

    ARTICLE

    Explainable Artificial Intelligence-Based Model Drift Detection Applicable to Unsupervised Environments

    Yongsoo Lee, Yeeun Lee, Eungyu Lee, Taejin Lee*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1701-1719, 2023, DOI:10.32604/cmc.2023.040235

    Abstract Cybersecurity increasingly relies on machine learning (ML) models to respond to and detect attacks. However, the rapidly changing data environment makes model life-cycle management after deployment essential. Real-time detection of drift signals from various threats is fundamental for effectively managing deployed models. However, detecting drift in unsupervised environments can be challenging. This study introduces a novel approach leveraging Shapley additive explanations (SHAP), a widely recognized explainability technique in ML, to address drift detection in unsupervised settings. The proposed method incorporates a range of plots and statistical techniques to enhance drift detection reliability and introduces a…
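
As a rough sketch of the idea this abstract outlines, the snippet below scores drift by comparing SHAP value distributions between a reference batch and a recent batch; the per-feature KS test is an illustrative stand-in for the paper's statistical techniques, and `shap_drift_scores` is a hypothetical helper.

```python
# Minimal SHAP-drift sketch: compare SHAP value distributions between a
# reference window and a recent window; KS test is an illustrative criterion.
import numpy as np
import shap
from scipy.stats import ks_2samp

def shap_drift_scores(model, X_ref, X_new):
    """Per-feature KS statistics; assumes shap_values returns a 2-D array."""
    explainer = shap.TreeExplainer(model)
    s_ref = explainer.shap_values(X_ref)
    s_new = explainer.shap_values(X_new)
    # A large statistic means the model now attributes that feature's
    # influence very differently -- a possible drift signal.
    return np.array([ks_2samp(s_ref[:, j], s_new[:, j]).statistic
                     for j in range(X_ref.shape[1])])
```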

Displaying results 1-10 of 36 (page 1 of 4).