Search Results (10)
  • Open Access

    ARTICLE

    Explainable Ensemble Learning Framework for Early Detection of Autism Spectrum Disorder: Enhancing Trust, Interpretability and Reliability in AI-Driven Healthcare

    Menwa Alshammeri, Noshina Tariq, NZ Jhanji, Mamoona Humayun, Muhammad Attique Khan

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.074627 - 29 January 2026

    Abstract Artificial Intelligence (AI) is changing healthcare by helping with diagnosis. However, for doctors to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. For this, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types…

  • Open Access

    ARTICLE

    Enhancing Anomaly Detection with Causal Reasoning and Semantic Guidance

    Weishan Gao, Ye Wang, Xiaoyin Wang, Xiaochuan Jing

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.073850 - 12 January 2026

    Abstract In the field of intelligent surveillance, weakly supervised video anomaly detection (WSVAD) has garnered widespread attention as a key technology that identifies anomalous events using only video-level labels. Although multiple instance learning (MIL) has dominated the WSVAD for a long time, its reliance solely on video-level labels without semantic grounding hinders a fine-grained understanding of visually similar yet semantically distinct events. In addition, insufficient temporal modeling obscures causal relationships between events, making anomaly decisions reactive rather than reasoning-based. To overcome the limitations above, this paper proposes an adaptive knowledge-based guidance method that integrates external structured…

  • Open Access

    ARTICLE

    Building Regulatory Confidence with Human-in-the-Loop AI in Paperless GMP Validation

    Manaliben Amin

    Journal on Artificial Intelligence, Vol.8, pp. 1-18, 2026, DOI:10.32604/jai.2026.073895 - 07 January 2026

    Abstract Artificial intelligence (AI) is steadily making its way into pharmaceutical validation, where it promises faster documentation, smarter testing strategies, and better handling of deviations. These gains are attractive, but in a regulated environment speed is never enough. Regulators want assurance that every system is reliable, that decisions are explainable, and that human accountability remains central. This paper sets out a Human-in-the-Loop (HITL) AI approach for Computer System Validation (CSV) and Computer Software Assurance (CSA). It relies on explainable AI (XAI) tools but keeps structured human review in place, so automation can be used without creating…

  • Open Access

    REVIEW

    Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation

    Khulud Salem Alshudukhi, Sijjad Ali, Mamoona Humayun, Omar Alruwaili

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3029-3085, 2025, DOI:10.32604/cmes.2025.073705 - 23 December 2025

    Abstract Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity—as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a…

  • Open Access

    ARTICLE

    PPG-Based Digital Biomarker for Diabetes Detection with Multiset Spatiotemporal Feature Fusion and XAI

    Mubashir Ali, Jingzhen Li, Zedong Nie

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4153-4177, 2025, DOI:10.32604/cmes.2025.073048 - 23 December 2025

    Abstract Diabetes imposes a substantial burden on global healthcare systems. Worldwide, nearly half of individuals with diabetes remain undiagnosed, while conventional diagnostic techniques are often invasive, painful, and expensive. In this study, we propose a noninvasive approach for diabetes detection using photoplethysmography (PPG), which is widely integrated into modern wearable devices. First, we derived velocity plethysmography (VPG) and acceleration plethysmography (APG) signals from PPG to construct multi-channel waveform representations. Second, we introduced a novel multiset spatiotemporal feature fusion framework that integrates hand-crafted temporal, statistical, and nonlinear features with recursive feature elimination and deep feature extraction using…

  • Open Access

    REVIEW

    Deep Learning and Federated Learning in Human Activity Recognition with Sensor Data: A Comprehensive Review

    Farhad Mortezapour Shiri, Thinagaran Perumal, Norwati Mustapha, Raihani Mohamed

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.2, pp. 1389-1485, 2025, DOI:10.32604/cmes.2025.071858 - 26 November 2025

    Abstract Human Activity Recognition (HAR) represents a rapidly advancing research domain, propelled by continuous developments in sensor technologies and the Internet of Things (IoT). Deep learning has become the dominant paradigm in sensor-based HAR systems, offering significant advantages over traditional machine learning methods by eliminating manual feature extraction, enhancing recognition accuracy for complex activities, and enabling the exploitation of unlabeled data through generative models. This paper provides a comprehensive review of recent advancements and emerging trends in deep learning models developed for sensor-based human activity recognition (HAR) systems. We begin with an overview of fundamental HAR…


  • Open Access

    ARTICLE

    Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations

    Nouman Ahmad, Changsheng Zhang

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 3321-3334, 2025, DOI:10.32604/cmc.2025.067044 - 23 September 2025

    Abstract Source code vulnerabilities present significant security threats, necessitating effective detection techniques. Rigid rule-sets and pattern matching are the foundation of traditional static analysis tools, which drown developers in false positives and miss context-sensitive vulnerabilities. Large Language Models (LLMs) like BERT, in particular, are examples of artificial intelligence (AI) that exhibit promise but frequently lack transparency. In order to overcome the issues with model interpretability, this work suggests a BERT-based LLM strategy for vulnerability detection that incorporates Explainable AI (XAI) methods like SHAP and attention heatmaps. Furthermore, to ensure auditable and comprehensible choices, we present a…

  • Open Access

    ARTICLE

    Robust False Data Injection Identification Framework for Power Systems Using Explainable Deep Learning

    Ghadah Aldehim, Shakila Basheer, Ala Saleh Alluhaidan, Sapiah Sakri

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 3599-3619, 2025, DOI:10.32604/cmc.2025.065643 - 23 September 2025

    Abstract Although the digitalization of power systems has expanded monitoring and control capabilities, it has also introduced new cyber-attack risks, chiefly from False Data Injection (FDI) attacks. A successful attack compromises sensors and operations, which can lead to serious disruptions, failures, and blackouts. In response to this challenge, this paper presents a reliable and innovative detection framework that leverages Bidirectional Long Short-Term Memory (Bi-LSTM) networks and employs explainable Artificial Intelligence (AI) methods. Not only does the suggested architecture detect potential fraud with high accuracy, but it also…

  • Open Access

    ARTICLE

    An Efficient Explainable AI Model for Accurate Brain Tumor Detection Using MRI Images

    Fatma M. Talaat, Mohamed Salem, Mohamed Shehata, Warda M. Shaban

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.2, pp. 2325-2358, 2025, DOI:10.32604/cmes.2025.067195 - 31 August 2025

    Abstract The diagnosis of brain tumors is an extended process that significantly depends on the expertise and skills of radiologists. The rise in patient numbers has substantially elevated the data processing volume, making conventional methods both costly and inefficient. Recently, Artificial Intelligence (AI) has gained prominence for developing automated systems that can accurately diagnose or segment brain tumors in a shorter time frame. Many researchers have examined various algorithms that provide both speed and accuracy in detecting and classifying brain tumors. This paper proposes a new model based on AI, called the Brain Tumor Detection (BTD)…

  • Open Access

    ARTICLE

    An AI-Enabled Framework for Transparency and Interpretability in Cardiovascular Disease Risk Prediction

    Isha Kiran, Shahzad Ali, Sajawal ur Rehman Khan, Musaed Alhussein, Sheraz Aslam, Khursheed Aurangzeb

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 5057-5078, 2025, DOI:10.32604/cmc.2025.058724 - 06 March 2025

    Abstract Cardiovascular disease (CVD) remains a leading global health challenge due to its high mortality rate and the complexity of early diagnosis, driven by risk factors such as hypertension, high cholesterol, and irregular pulse rates. Traditional diagnostic methods often struggle with the nuanced interplay of these risk factors, making early detection difficult. In this research, we propose a novel artificial intelligence-enabled (AI-enabled) framework for CVD risk prediction that integrates machine learning (ML) with eXplainable AI (XAI) to provide both high-accuracy predictions and transparent, interpretable insights. Compared to existing studies that typically focus on either optimizing ML…

Displaying results 1-10 of 10 on page 1.