Search Results (29)
  • Open Access

    REVIEW

    The Transparency Revolution in Geohazard Science: A Systematic Review and Research Roadmap for Explainable Artificial Intelligence

    Moein Tosan1,*, Vahid Nourani2,3, Ozgur Kisi4,5,6, Yongqiang Zhang7, Sameh A. Kantoush8, Mekonnen Gebremichael9, Ruhollah Taghizadeh-Mehrjardi10, Jinhui Jeanne Huang11

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.074768 - 29 January 2026

    Abstract The integration of machine learning (ML) into geohazard assessment has driven a paradigm shift, yielding models with a level of predictive accuracy previously considered unattainable. However, the black-box nature of these systems presents a significant barrier, hindering their operational adoption, regulatory approval, and full scientific validation. This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence (XAI) as applied to geohazard science (GeoXAI), a domain that aims to resolve the long-standing trade-off between model performance and interpretability. A rigorous synthesis of 87 foundational…
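    As background for the performance/interpretability trade-off this review targets, a model-agnostic attribution pass is the simplest GeoXAI pattern. The sketch below is illustrative only; the feature names and data are invented, not drawn from the paper's corpus:

```python
# Illustrative sketch only: a black-box geohazard classifier explained with
# scikit-learn's permutation importance. Feature names and data are invented
# stand-ins, not any real hazard dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                   # synthetic predictors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic landslide label
features = ["slope", "rainfall", "soil_moisture", "lithology_idx"]  # hypothetical

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance: how much accuracy drops when one feature is
# shuffled -- a simple, model-agnostic explanation of the black box.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(features, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```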

  • Open Access

    ARTICLE

    Explainable Ensemble Learning Framework for Early Detection of Autism Spectrum Disorder: Enhancing Trust, Interpretability and Reliability in AI-Driven Healthcare

    Menwa Alshammeri1,2,*, Noshina Tariq3, NZ Jhanji4,5, Mamoona Humayun6, Muhammad Attique Khan7

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.074627 - 29 January 2026

    Abstract Artificial Intelligence (AI) is changing healthcare by helping with diagnosis. However, for doctors to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. For this, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types…
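    A minimal sketch of the combined-model idea this abstract describes, assuming a soft-voting ensemble with scikit-learn's GradientBoostingClassifier standing in for XGBoost; the data is synthetic, not the study's ASD datasets:

```python
# Hedged sketch: soft-voting ensemble of a random forest, a gradient-boosted
# model (stand-in for XGBoost), and a small neural network, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across the three models
)
ensemble.fit(X_tr, y_tr)
print(f"held-out accuracy: {ensemble.score(X_te, y_te):.3f}")
```

    Soft voting keeps each base model's probability estimates, which also makes per-model contributions easier to inspect than a single fused score.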

  • Open Access

    REVIEW

    Learning from Scarcity: A Review of Deep Learning Strategies for Cold-Start Energy Time-Series Forecasting

    Jihoon Moon*

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.071052 - 29 January 2026

    Abstract Predicting the behavior of renewable energy systems requires models capable of generating accurate forecasts from limited historical data, a challenge that becomes especially pronounced when commissioning new facilities where operational records are scarce. This review aims to synthesize recent progress in data-efficient deep learning approaches for addressing such “cold-start” forecasting problems. It primarily covers three interrelated domains—solar photovoltaic (PV), wind power, and electrical load forecasting—where data scarcity and operational variability are most critical, while also including representative studies on hydropower and carbon emission prediction to provide a broader systems perspective. To this end, we examined…
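    One family of strategies such reviews cover is transfer learning across sites. The sketch below is an illustration under invented assumptions (series, window size, layer sizes), not the review's own code:

```python
# Illustrative cold-start sketch: pretrain a forecaster on a data-rich source
# site, then fine-tune only the output head on a few days of target data.
import math
import torch
import torch.nn as nn

def make_series(n, phase):  # synthetic daily PV-like profile
    t = torch.arange(n, dtype=torch.float32)
    return torch.sin(2 * math.pi * t / 24 + phase) + 0.1 * torch.randn(n)

def windows(series, w=24):  # sliding windows -> next-step targets
    X = torch.stack([series[i:i + w] for i in range(len(series) - w)])
    return X, series[w:].unsqueeze(1)

model = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1))

def fit(X, y, params, epochs, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(X), y).backward()
        opt.step()

# 1) Pretrain on a long history from an established site.
fit(*windows(make_series(2000, 0.0)), model.parameters(), epochs=200)

# 2) Cold start: freeze the encoder, adapt only the head on ~3 days of data.
for p in model[0].parameters():
    p.requires_grad = False
fit(*windows(make_series(72, 0.7)), model[2].parameters(), epochs=100)
```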

  • Open Access

    ARTICLE

    Enhancing Anomaly Detection with Causal Reasoning and Semantic Guidance

    Weishan Gao1,2, Ye Wang1,2, Xiaoyin Wang1,2, Xiaochuan Jing1,2,*

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.073850 - 12 January 2026

    Abstract In the field of intelligent surveillance, weakly supervised video anomaly detection (WSVAD) has garnered widespread attention as a key technology that identifies anomalous events using only video-level labels. Although multiple instance learning (MIL) has long dominated WSVAD, its reliance solely on video-level labels without semantic grounding hinders a fine-grained understanding of visually similar yet semantically distinct events. In addition, insufficient temporal modeling obscures causal relationships between events, making anomaly decisions reactive rather than reasoning-based. To overcome these limitations, this paper proposes an adaptive knowledge-based guidance method that integrates external structured…
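    For context, the MIL baseline the abstract critiques is usually trained with a bag-level ranking loss. A minimal sketch of that standard formulation (not the paper's proposed knowledge-guided method):

```python
# MIL ranking-loss sketch: each video is a bag of per-segment anomaly scores;
# training pushes the top score of an anomalous bag above the top score of a
# normal bag by a margin. Scores here are random stand-ins for a network's output.
import torch

def mil_ranking_loss(scores_anom, scores_norm, margin=1.0):
    """scores_*: (segments,) anomaly scores for one anomalous / normal video."""
    top_anom = scores_anom.max()  # most anomalous segment in the positive bag
    top_norm = scores_norm.max()  # hardest segment in the negative bag
    return torch.clamp(margin - top_anom + top_norm, min=0.0)

anom = torch.rand(32, requires_grad=True)  # 32 segment scores per video
norm = torch.rand(32, requires_grad=True)
loss = mil_ranking_loss(anom, norm)
loss.backward()
print(float(loss))
```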

  • Open Access

    ARTICLE

    Building Regulatory Confidence with Human-in-the-Loop AI in Paperless GMP Validation

    Manaliben Amin*

    Journal on Artificial Intelligence, Vol.8, pp. 1-18, 2026, DOI:10.32604/jai.2026.073895 - 07 January 2026

    Abstract Artificial intelligence (AI) is steadily making its way into pharmaceutical validation, where it promises faster documentation, smarter testing strategies, and better handling of deviations. These gains are attractive, but in a regulated environment speed is never enough. Regulators want assurance that every system is reliable, that decisions are explainable, and that human accountability remains central. This paper sets out a Human-in-the-Loop (HITL) AI approach for Computer System Validation (CSV) and Computer Software Assurance (CSA). It relies on explainable AI (XAI) tools but keeps structured human review in place, so automation can be used without creating…
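    The core HITL pattern the paper argues for can be reduced to an approval gate in code. The sketch below is a generic illustration with hypothetical names and fields, not the paper's validated system:

```python
# Generic HITL gate: the AI may propose a validation outcome, but nothing is
# committed to the record without an identified human reviewer's sign-off.
# All identifiers and fields here are hypothetical.
from dataclasses import dataclass

@dataclass
class AiProposal:
    test_id: str
    verdict: str      # e.g., "pass" / "deviation"
    rationale: str    # XAI explanation shown to the reviewer
    confidence: float

def commit_result(proposal: AiProposal, reviewer: str, approved: bool) -> dict:
    if not approved:
        raise ValueError("Human reviewer rejected the AI proposal; escalate.")
    # The audit trail records both the AI rationale and the accountable human.
    return {
        "test_id": proposal.test_id,
        "verdict": proposal.verdict,
        "ai_rationale": proposal.rationale,
        "ai_confidence": proposal.confidence,
        "approved_by": reviewer,
    }

record = commit_result(
    AiProposal("OQ-014", "pass", "All acceptance criteria met in run log.", 0.97),
    reviewer="qa.lead@example.com",
    approved=True,
)
print(record)
```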

  • Open Access

    ARTICLE

    Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization

    Amjad Rehman1,*, Tanzila Saba1, Mona M. Jamjoom2, Shaha Al-Otaibi3, Muhammad I. Khan1

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-15, 2026, DOI:10.32604/cmc.2025.068958 - 10 November 2025

    Abstract Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. …
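    The named preprocessing stage maps directly onto a scikit-learn pipeline. A minimal sketch on synthetic data, with a random forest standing in for the paper's tuned classifier:

```python
# Preprocessing sketch: RobustScaler (median/IQR scaling) damps outliers,
# then QuantileTransformer normalizes skewed feature distributions before a
# multi-class classifier. KDD99 features are not reproduced here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import QuantileTransformer, RobustScaler

X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)

pipe = Pipeline([
    ("robust", RobustScaler()),  # outlier-resistant scaling
    ("quantile", QuantileTransformer(output_distribution="normal", random_state=0)),
    ("clf", RandomForestClassifier(random_state=0)),  # stand-in for the tuned model
])
pipe.fit(X, y)
print(f"training accuracy: {pipe.score(X, y):.3f}")
```

    Ordering matters here: scaling first keeps the quantile fit from being dominated by extreme values, which is presumably why the abstract names both transforms.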

  • Open Access

    REVIEW

    Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation

    Khulud Salem Alshudukhi1,*, Sijjad Ali2, Mamoona Humayun3,*, Omar Alruwaili4

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3029-3085, 2025, DOI:10.32604/cmes.2025.073705 - 23 December 2025

    Abstract Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity—as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a…

  • Open Access

    ARTICLE

    An Explainable Deep Learning Framework for Kidney Cancer Classification Using VGG16 and Layer-Wise Relevance Propagation on CT Images

    Asma Batool1, Fahad Ahmed1, Naila Sammar Naz1, Ayman Altameem2, Ateeq Ur Rehman3,4, Khan Muhammad Adnan5,*, Ahmad Almogren6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4129-4152, 2025, DOI:10.32604/cmes.2025.073149 - 23 December 2025

    Abstract Early and accurate cancer diagnosis through medical imaging is crucial for guiding treatment and enhancing patient survival. However, many state-of-the-art deep learning (DL) methods remain opaque and lack clinical interpretability. This paper presents an explainable artificial intelligence (XAI) framework that combines a fine-tuned Visual Geometry Group 16-layer network (VGG16) convolutional neural network with layer-wise relevance propagation (LRP) to deliver high-performance classification and transparent decision support. This approach is evaluated on the publicly available Kaggle kidney cancer imaging dataset, which comprises labeled cancerous and non-cancerous kidney scans. The proposed model achieved 98.75% overall accuracy, with precision, …
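    The architecture the abstract names can be sketched in a few lines of PyTorch. This is an outline under stated assumptions, not the authors' implementation:

```python
# Pipeline-shape sketch: torchvision's VGG16 with its 1000-class head swapped
# for a 2-class layer (cancerous vs. non-cancerous), explained via layer-wise
# relevance propagation. Captum's LRP is used as one available implementation
# (assumed installed); the paper's exact LRP rules, weights, and training
# loop are not reproduced.
import torch
import torch.nn as nn
from torchvision.models import vgg16
from captum.attr import LRP

model = vgg16(weights=None)               # load pretrained weights when fine-tuning
model.classifier[6] = nn.Linear(4096, 2)  # replace the ImageNet classification head
model.eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a CT slice

relevance = LRP(model).attribute(x, target=1)  # per-pixel relevance for class 1
print(relevance.shape)                         # torch.Size([1, 3, 224, 224])
```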

  • Open Access

    ARTICLE

    PPG Based Digital Biomarker for Diabetes Detection with Multiset Spatiotemporal Feature Fusion and XAI

    Mubashir Ali1,2, Jingzhen Li1, Zedong Nie1,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4153-4177, 2025, DOI:10.32604/cmes.2025.073048 - 23 December 2025

    Abstract Diabetes imposes a substantial burden on global healthcare systems. Worldwide, nearly half of individuals with diabetes remain undiagnosed, while conventional diagnostic techniques are often invasive, painful, and expensive. In this study, we propose a noninvasive approach for diabetes detection using photoplethysmography (PPG), which is widely integrated into modern wearable devices. First, we derived velocity plethysmography (VPG) and acceleration plethysmography (APG) signals from PPG to construct multi-channel waveform representations. Second, we introduced a novel multiset spatiotemporal feature fusion framework that integrates hand-crafted temporal, statistical, and nonlinear features with recursive feature elimination and deep feature extraction using…
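    The VPG/APG derivation step is simple enough to show directly. A sketch with an assumed sampling rate and a synthetic waveform, not the study's data:

```python
# Signal-derivation sketch: velocity (VPG) and acceleration (APG)
# plethysmography as first and second time derivatives of the PPG waveform,
# stacked into a multi-channel representation.
import numpy as np

fs = 100.0  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

vpg = np.gradient(ppg, 1 / fs)  # first derivative: velocity PPG
apg = np.gradient(vpg, 1 / fs)  # second derivative: acceleration PPG

multichannel = np.stack([ppg, vpg, apg])  # (3, n_samples) input to a fusion model
print(multichannel.shape)
```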

  • Open Access

    REVIEW

    A Systematic Review of Multimodal Fusion and Explainable AI Applications in Breast Cancer Diagnosis

    Deema Alzamil1,2,*, Bader Alkhamees2, Mohammad Mehedi Hassan2,3

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 2971-3027, 2025, DOI:10.32604/cmes.2025.070867 - 23 December 2025

    Abstract Breast cancer diagnosis relies on information from diverse sources, such as mammogram images, ultrasound scans, patient records, and genetic tests, but most AI tools consider only one of these at a time, which limits their ability to produce accurate and comprehensive decisions. In recent years, multimodal learning has emerged, enabling the integration of heterogeneous data to improve performance and diagnostic accuracy. However, clinicians cannot always see how or why these AI tools make their choices, a significant bottleneck for reliability and clinical adoption. Hence, researchers are adding…
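    The fusion idea the review surveys is commonly implemented as feature-level fusion of per-modality embeddings. A minimal sketch with arbitrary dimensions, not drawn from any surveyed system:

```python
# Feature-level fusion sketch: embeddings from an imaging branch and a
# clinical-record branch are concatenated before a shared classification
# head. All dimensions are arbitrary assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, img_dim=512, tab_dim=16, hidden=64):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)  # imaging-branch embedding
        self.tab_proj = nn.Linear(tab_dim, hidden)  # clinical-record embedding
        self.head = nn.Linear(2 * hidden, 2)        # benign vs. malignant

    def forward(self, img_feat, tab_feat):
        fused = torch.cat([self.img_proj(img_feat).relu(),
                           self.tab_proj(tab_feat).relu()], dim=-1)
        return self.head(fused)

model = FusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 2])
```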

Displaying results 1-10 of 29.