Search Results (12)
  • Open Access

    ARTICLE

    GliomaCNN: An Effective Lightweight CNN Model in Assessment of Classifying Brain Tumor from Magnetic Resonance Images Using Explainable AI

    Md. Atiqur Rahman1, Mustavi Ibne Masum1, Khan Md Hasib2, M. F. Mridha3,*, Sultan Alfarhood4, Mejdl Safran4,*, Dunren Che5

    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.3, pp. 2425-2448, 2024, DOI:10.32604/cmes.2024.050760

    Abstract Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality. This study addresses the pressing issue of brain tumor classification using magnetic resonance imaging (MRI), focusing on distinguishing between Low-Grade Gliomas (LGG) and High-Grade Gliomas (HGG). LGGs are benign and typically manageable with surgical resection, while HGGs are malignant and more aggressive. The research introduces GliomaCNN, an innovative custom convolutional neural network (CNN) model that stands out as a lightweight CNN compared to its predecessors. The research utilized the BraTS 2020… More >
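    The exact GliomaCNN architecture is not reproduced in this excerpt; below is a minimal PyTorch sketch of a lightweight binary LGG/HGG classifier of the kind the abstract describes. The layer sizes, channel counts, and 128×128 input resolution are illustrative assumptions, not the paper’s configuration.

    ```python
    # Minimal sketch of a lightweight CNN for binary MRI classification
    # (LGG vs. HGG). All sizes are illustrative, not the paper's GliomaCNN.
    import torch
    import torch.nn as nn

    class TinyGliomaCNN(nn.Module):
        def __init__(self, in_channels: int = 1, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),   # 128x128 -> 64x64
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),   # 64x64 -> 32x32
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),   # global average pooling keeps it light
                nn.Flatten(),
                nn.Linear(32, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # Dummy batch: four single-channel 128x128 MRI slices.
    logits = TinyGliomaCNN()(torch.randn(4, 1, 128, 128))
    print(logits.shape)   # torch.Size([4, 2])
    ```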

  • Open Access

    ARTICLE

    CrossLinkNet: An Explainable and Trustworthy AI Framework for Whole-Slide Images Segmentation

    Peng Xiao1, Qi Zhong2, Jingxue Chen1, Dongyuan Wu1, Zhen Qin1, Erqiang Zhou1,*

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4703-4724, 2024, DOI:10.32604/cmc.2024.049791

    Abstract In the intelligent medical diagnosis area, Artificial Intelligence (AI)’s trustworthiness, reliability, and interpretability are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, which is the gold standard of cancer diagnosis and relies on a pathologist’s careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but… More >

  • Open Access

    ARTICLE

    A Study on the Explainability of Thyroid Cancer Prediction: SHAP Values and Association-Rule Based Feature Integration Framework

    Sujithra Sankar1,*, S. Sathyalakshmi2

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 3111-3138, 2024, DOI:10.32604/cmc.2024.048408

    Abstract In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule based feature-integrated machine learning model which shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as… More >
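    As an illustration of the SHAP side of such a framework, here is a minimal sketch of computing SHAP attributions for a tree model with the shap package. The synthetic data and the regressor are stand-ins for the paper’s thyroid-cancer dataset and classifier; tree classifiers work the same way.

    ```python
    # Minimal sketch of SHAP feature attribution for a tree ensemble.
    # Synthetic data; a regressor is used for brevity.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))            # 200 patients, 5 features
    y = X[:, 0] + 0.5 * X[:, 1]              # synthetic target

    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles.
    shap_values = shap.TreeExplainer(model).shap_values(X)

    # Mean absolute SHAP value per feature gives a global importance
    # ranking, the kind of summary integrated with association rules.
    print(np.abs(shap_values).mean(axis=0))
    ```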

  • Open Access

    ARTICLE

    MAIPFE: An Efficient Multimodal Approach Integrating Pre-Emptive Analysis, Personalized Feature Selection, and Explainable AI

    Moshe Dayan Sirapangi1, S. Gopikrishnan1,*

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2229-2251, 2024, DOI:10.32604/cmc.2024.047438

    Abstract Medical Internet of Things (IoT) devices are becoming more and more common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way. Existing methods, while useful, have limitations in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. MAIPFE is a multimodal approach integrating pre-emptive analysis, personalized feature selection, and explainable AI for real-time health… More >

  • Open Access

    ARTICLE

    Transparent and Accurate COVID-19 Diagnosis: Integrating Explainable AI with Advanced Deep Learning in CT Imaging

    Mohammad Mehedi Hassan1,*, Salman A. AlQahtani2, Mabrook S. AlRakhami1, Ahmed Zohier Elhendi3

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 3101-3123, 2024, DOI:10.32604/cmes.2024.047940

    Abstract In the current landscape of the COVID-19 pandemic, the utilization of deep learning in medical imaging, especially in chest computed tomography (CT) scan analysis for virus detection, has become increasingly significant. Despite its potential, deep learning’s “black box” nature has been a major impediment to its broader acceptance in clinical environments, where transparency in decision-making is imperative. To bridge this gap, our research integrates Explainable AI (XAI) techniques, specifically the Local Interpretable Model-Agnostic Explanations (LIME) method, with advanced deep learning models. This integration forms a sophisticated and transparent framework for COVID-19 identification, enhancing the capability… More >
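    A minimal sketch of how LIME is typically applied to an image classifier with the lime package follows. The probability function and random image are dummy stand-ins for the paper’s trained CT model and scan data.

    ```python
    # Minimal sketch of LIME on an image classifier (dummy model/data).
    import numpy as np
    from lime import lime_image

    def classifier_fn(images: np.ndarray) -> np.ndarray:
        """Dummy 2-class probability function: (N, H, W, 3) -> (N, 2)."""
        score = images.mean(axis=(1, 2, 3)) / 255.0
        return np.stack([1.0 - score, score], axis=1)

    image = np.random.randint(0, 255, size=(64, 64, 3)).astype(np.uint8)

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, classifier_fn, top_labels=1, hide_color=0, num_samples=200
    )
    # Superpixel mask marking the regions that drove the top prediction.
    _, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5
    )
    print(mask.shape)   # (64, 64)
    ```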

  • Open Access

    ARTICLE

    Explainable AI and Interpretable Model for Insurance Premium Prediction

    Umar Abdulkadir Isa*, Anil Fernando*

    Journal on Artificial Intelligence, Vol.5, pp. 31-42, 2023, DOI:10.32604/jai.2023.040213

    Abstract Traditional machine learning metrics (TMLMs) such as precision, recall, accuracy, MSE, and RMSE are quite useful for the current research work, but they are not enough for a practitioner to be confident in the performance and dependability of an innovative interpretable model (85%–92%). We included in the prediction process machine learning models (MLMs) with greater than 99% accuracy and a sensitivity of 95%–98% on the database. We need to explain the model to domain specialists through the MLMs; human-understandable explanations, in addition to those for ML professionals, must establish trust in our model’s predictions. This is achieved by creating… More >
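    For reference, the traditional metrics the abstract lists can be computed with scikit-learn; the predictions below are synthetic, purely to show the calls.

    ```python
    # Traditional metrics on synthetic predictions (illustrative only).
    import numpy as np
    from sklearn.metrics import (accuracy_score, mean_squared_error,
                                 precision_score, recall_score)

    y_true = np.array([1, 0, 1, 1, 0, 1])
    y_pred = np.array([1, 0, 1, 0, 0, 1])

    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred))
    print("recall   :", recall_score(y_true, y_pred))
    mse = mean_squared_error(y_true, y_pred)
    print("MSE      :", mse, "RMSE:", np.sqrt(mse))
    ```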

  • Open Access

    ARTICLE

    Implementation of Rapid Code Transformation Process Using Deep Learning Approaches

    Bao Rong Chang1, Hsiu-Fen Tsai2,*, Han-Lin Chou1

    CMES-Computer Modeling in Engineering & Sciences, Vol.136, No.1, pp. 107-134, 2023, DOI:10.32604/cmes.2023.024018

    Abstract Our previous work introduced a newly generated program using the code transformation model GPT-2, verifying the generated programming codes through simhash (SH) and longest common subsequence (LCS) algorithms. However, the entire code transformation process has proven time-consuming. Therefore, the objective of this study is to speed up the code transformation process significantly. This paper proposes deep learning approaches for modifying SH using a variational simhash (VSH) algorithm and replacing LCS with a piecewise longest common subsequence (PLCS) algorithm to accelerate the verification process in the test phase. Besides the code transformation… More >
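    A minimal sketch of the baseline LCS verification step follows; the paper’s VSH and PLCS variants are speed-ups of simhash and of this dynamic program, respectively, and are not reproduced here.

    ```python
    # Minimal sketch of the baseline longest-common-subsequence (LCS)
    # check used to verify generated code against a reference.
    def lcs_length(a: str, b: str) -> int:
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, ca in enumerate(a, 1):
            for j, cb in enumerate(b, 1):
                dp[i][j] = (dp[i - 1][j - 1] + 1 if ca == cb
                            else max(dp[i - 1][j], dp[i][j - 1]))
        return dp[len(a)][len(b)]

    ref = "def add(a, b): return a + b"
    gen = "def add(x, y): return x + y"
    # Similarity ratio between generated and reference code.
    print(lcs_length(ref, gen) / max(len(ref), len(gen)))
    ```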

  • Open Access

    ARTICLE

    Explainable Anomaly Detection Using Vision Transformer Based SVDD

    Ji-Won Baek1, Kyungyong Chung2,*

    CMC-Computers, Materials & Continua, Vol.74, No.3, pp. 6573-6586, 2023, DOI:10.32604/cmc.2023.035246

    Abstract Explainable AI extracts a variety of patterns from data during the learning process and draws out hidden information by discovering semantic relationships, making it possible to offer an explainable basis for decision-making on inference results. Through the causality of risk factors that have an ambiguous association in big medical data, it is possible to increase the transparency and reliability of explainable decision-making that helps to diagnose disease status. In addition, the technique makes it possible to accurately predict disease risk for anomaly detection. A vision transformer for anomaly detection from image data performs classification through an MLP.… More >
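    A minimal sketch of an SVDD-style objective on encoder embeddings follows; the toy linear encoder stands in for the paper’s vision-transformer backbone, and all sizes are illustrative.

    ```python
    # Minimal sketch of a deep SVDD-style objective: map normal samples
    # close to a hypersphere center c; distance to c is the anomaly score.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))
    x = torch.randn(32, 1, 28, 28)            # batch of "normal" samples

    with torch.no_grad():
        center = encoder(x).mean(dim=0)       # hypersphere center c

    # Training objective: pull embeddings of normal data toward c.
    loss = ((encoder(x) - center) ** 2).sum(dim=1).mean()
    loss.backward()

    # At test time, the anomaly score is the squared distance to c.
    score = ((encoder(x[:1]) - center) ** 2).sum(dim=1)
    print(loss.item(), score.item())
    ```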

  • Open Access

    ARTICLE

    Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods

    Wahidul Hasan Abir1, Faria Rahman Khanam1, Kazi Nabiul Alam1, Myriam Hadjouni2, Hela Elmannai3, Sami Bourouis4, Rajesh Dey5, Mohammad Monirujjaman Khan1,*

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2151-2169, 2023, DOI:10.32604/iasc.2023.029653

    Abstract Nowadays, deepfakes are wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person’s likeness with another’s in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfakes use the latest technologies, such as Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), to construct automated methods for creating fake content that is becoming increasingly difficult… More >

  • Open Access

    ARTICLE

    Explainable AI Enabled Infant Mortality Prediction Based on Neonatal Sepsis

    Priti Shaw1, Kaustubh Pachpor2, Suresh Sankaranarayanan3,*

    Computer Systems Science and Engineering, Vol.44, No.1, pp. 311-325, 2023, DOI:10.32604/csse.2023.025281

    Abstract Neonatal sepsis is the third most common cause of neonatal mortality and a serious public health problem, especially in developing countries. There has been research on human sepsis, vaccine response, and immunity. Machine learning methodologies have also been used for predicting infant mortality based on features such as age, birth weight, gestational weeks, and the Appearance, Pulse, Grimace, Activity and Respiration (APGAR) score. Sepsis, which is considered the most decisive condition for infant mortality, has never been considered for mortality prediction. So, we have deployed a state-of-the-art deep neural model and performed… More >

Displaying 1–10 of 12 results (page 1).