Search Results (51)
  • Open Access

    ARTICLE

    Explainable Ensemble Learning Framework for Early Detection of Autism Spectrum Disorder: Enhancing Trust, Interpretability and Reliability in AI-Driven Healthcare

    Menwa Alshammeri1,2,*, Noshina Tariq3, NZ Jhanji4,5, Mamoona Humayun6, Muhammad Attique Khan7

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.074627 - 29 January 2026

    Abstract Artificial Intelligence (AI) is changing healthcare by assisting with diagnosis. However, for doctors to trust AI tools, the tools need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. To this end, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types…
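
    The abstract names the base learners but not how they are combined. As a minimal sketch under that assumption, a soft-voting ensemble of the three named model families could look like this (synthetic data and hyperparameters are illustrative placeholders, not the authors' setup):

    ```python
    # Sketch only: soft-voting ensemble of Random Forest, XGBoost, and a
    # neural network. Synthetic data stands in for the ASD screening
    # features, which are not given in the abstract.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
            ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
        ],
        voting="soft",  # average the three models' predicted probabilities
    )
    print(cross_val_score(ensemble, X, y, cv=5).mean())
    ```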

  • Open Access

    ARTICLE

    CardioForest: An Explainable Ensemble Learning Model for Automatic Wide QRS Complex Tachycardia Diagnosis from ECG

    Vaskar Chakma1,#, Xiaolin Ju1,#, Heling Cao2, Xue Feng3, Xiaodong Ji3, Haiyan Pan3,*, Gao Zhan1,*

    Journal of Intelligent Medicine and Healthcare, Vol.4, pp. 37-86, 2026, DOI:10.32604/jimh.2026.075201 - 23 January 2026

    Abstract Wide QRS Complex Tachycardia (WCT) is a life-threatening cardiac arrhythmia requiring rapid and accurate diagnosis. Traditional manual electrocardiogram (ECG) interpretation is time-consuming and subject to inter-observer variability, while existing AI models often lack the clinical interpretability necessary for trusted deployment in emergency settings. We developed CardioForest, an optimized Random Forest ensemble model, for automated WCT detection from 12-lead ECG signals. The model was trained, tested, and validated using 10-fold cross-validation on 800,000 ten-second 12-lead ECG recordings from the MIMIC-IV dataset (15.46% WCT prevalence), with comparative evaluation against XGBoost, LightGBM, and Gradient Boosting models. Performance was…
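
    A rough sketch of the evaluation protocol the abstract reports, i.e., a Random Forest assessed with 10-fold cross-validation on heavily imbalanced data; the features and class ratio below are synthetic stand-ins, not the MIMIC-IV pipeline:

    ```python
    # Illustrative only: features stand in for values derived from 12-lead
    # ECG windows; ~15% positives mimic the reported WCT prevalence.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    X, y = make_classification(n_samples=2000, n_features=30,
                               weights=[0.85, 0.15], random_state=0)

    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                 random_state=0)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"10-fold ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```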

  • Open Access

    ARTICLE

    Enhanced COVID-19 and Viral Pneumonia Classification Using Customized EfficientNet-B0: A Comparative Analysis with VGG16 and ResNet50

    Williams Kyei*, Chunyong Yin, Kelvin Amos Nicodemas, Khagendra Darlami

    Journal on Artificial Intelligence, Vol.8, pp. 19-38, 2026, DOI:10.32604/jai.2026.074988 - 20 January 2026

    Abstract The COVID-19 pandemic has underscored the need for rapid and accurate diagnostic tools to differentiate respiratory infections from normal cases using chest X-rays (CXRs). Manual interpretation of CXRs is time-consuming and prone to errors, particularly in distinguishing COVID-19 from viral pneumonia. This research addresses these challenges by proposing a customized EfficientNet-B0 model for ternary classification (COVID-19, Viral Pneumonia, Normal) on the COVID-19 Radiography Database. Employing transfer learning with architectural modifications, including a tailored classification head and regularization techniques, the model achieves superior performance. Evaluated via accuracy, F1-score (macro-averaged), AUROC (macro-averaged), precision (macro-averaged), recall (macro-averaged), inference…
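
    The abstract's setup, a pretrained EfficientNet-B0 backbone plus a tailored head for three classes, can be sketched as follows; the head layers and dropout rate are assumptions, not the authors' exact architecture:

    ```python
    # Structural sketch of transfer learning with a customized head for
    # the three classes (COVID-19, Viral Pneumonia, Normal).
    import tensorflow as tf

    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    base.trainable = False  # freeze backbone; unfreeze top blocks to fine-tune

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),  # regularization, as the abstract mentions
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),  # ternary output
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    ```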

  • Open Access

    ARTICLE

    Enhancing Anomaly Detection with Causal Reasoning and Semantic Guidance

    Weishan Gao1,2, Ye Wang1,2, Xiaoyin Wang1,2, Xiaochuan Jing1,2,*

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.073850 - 12 January 2026

    Abstract In the field of intelligent surveillance, weakly supervised video anomaly detection (WSVAD) has garnered widespread attention as a key technology that identifies anomalous events using only video-level labels. Although multiple instance learning (MIL) has long dominated WSVAD, its reliance solely on video-level labels without semantic grounding hinders a fine-grained understanding of visually similar yet semantically distinct events. In addition, insufficient temporal modeling obscures causal relationships between events, making anomaly decisions reactive rather than reasoning-based. To overcome these limitations, this paper proposes an adaptive knowledge-based guidance method that integrates external structured…
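
    For context, the MIL baseline the abstract contrasts against typically trains on a hinge ranking objective over per-snippet anomaly scores; a minimal sketch (snippet counts and margin are illustrative, not the paper's method):

    ```python
    # Standard MIL ranking loss for WSVAD: with only video-level labels,
    # the most anomalous snippets of an anomalous video should score
    # higher than those of a normal video.
    import torch

    def mil_ranking_loss(scores_anomalous, scores_normal, k=3, margin=1.0):
        """scores_*: per-snippet anomaly scores for one video, shape (N,)."""
        top_a = scores_anomalous.topk(k).values.mean()
        top_n = scores_normal.topk(k).values.mean()
        return torch.clamp(margin - top_a + top_n, min=0.0)  # hinge ranking

    # toy usage: 32 snippet scores per video, e.g. from a scoring network
    loss = mil_ranking_loss(torch.rand(32), torch.rand(32))
    print(loss.item())
    ```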

  • Open Access

    ARTICLE

    BearFusionNet: A Multi-Stream Attention-Based Deep Learning Framework with Explainable AI for Accurate Detection of Bearing Casting Defects

    Md. Ehsanul Haque1, Md. Nurul Absur2, Fahmid Al Farid3, Md Kamrul Siam4, Jia Uddin5,*, Hezerul Abdul Karim3,*

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.071771 - 12 January 2026

    Abstract Manual inspection of bearing casting defects is impractical and unreliable, particularly for micro-level anomalies that can develop into major defects at scale. To address these challenges, we propose BearFusionNet, a multi-stream attention-based deep learning architecture that merges DenseNet201 and MobileNetV2 for feature extraction with a classification head inspired by VGG19. This hybrid design extracts rich representations at multiple scales, supported by a preprocessing pipeline that brings defect saliency to the fore through contrast adjustment, denoising, and edge…
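
    The two-stream fusion the abstract describes can be sketched roughly as below; the attention mechanism is omitted and the VGG19-inspired head is simplified to a small dense stack, so this is a structural outline rather than the authors' model:

    ```python
    # Two backbones over the same input image, pooled and concatenated
    # into one classification head (defect vs. no defect).
    import tensorflow as tf

    inp = tf.keras.Input(shape=(224, 224, 3))
    d = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet")(inp)
    m = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet")(inp)

    fused = tf.keras.layers.Concatenate()([
        tf.keras.layers.GlobalAveragePooling2D()(d),
        tf.keras.layers.GlobalAveragePooling2D()(m),
    ])
    x = tf.keras.layers.Dense(512, activation="relu")(fused)  # simplified head
    x = tf.keras.layers.Dropout(0.5)(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)

    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    ```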

  • Open Access

    ARTICLE

    Building Regulatory Confidence with Human-in-the-Loop AI in Paperless GMP Validation

    Manaliben Amin*

    Journal on Artificial Intelligence, Vol.8, pp. 1-18, 2026, DOI:10.32604/jai.2026.073895 - 07 January 2026

    Abstract Artificial intelligence (AI) is steadily making its way into pharmaceutical validation, where it promises faster documentation, smarter testing strategies, and better handling of deviations. These gains are attractive, but in a regulated environment speed is never enough. Regulators want assurance that every system is reliable, that decisions are explainable, and that human accountability remains central. This paper sets out a Human-in-the-Loop (HITL) AI approach for Computer System Validation (CSV) and Computer Software Assurance (CSA). It relies on explainable AI (XAI) tools but keeps structured human review in place, so automation can be used without creating…
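
    The paper's central mechanism, structured human review over AI output, can be illustrated with a hypothetical gating function; the record fields and confidence threshold here are assumptions, not the paper's specification:

    ```python
    # Hypothetical HITL gate: AI suggestions below a confidence threshold
    # are routed to a named human reviewer, keeping accountability with a
    # person and leaving a record for the audit trail.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ValidationRecord:
        item_id: str
        ai_label: str
        ai_confidence: float
        reviewer: Optional[str] = None  # set when a human signs off

    def route(rec: ValidationRecord, threshold: float = 0.95) -> str:
        """High-confidence output is accepted but still logged; anything
        below the threshold must be reviewed by an accountable person."""
        if rec.ai_confidence >= threshold:
            return f"auto-accepted (logged): {rec.item_id}"
        return f"queued for human review: {rec.item_id}"
    ```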

  • Open Access

    ARTICLE

    Graph-Based Intrusion Detection with Explainable Edge Classification Learning

    Jaeho Shin1, Jaekwang Kim2,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-26, 2026, DOI:10.32604/cmc.2025.068767 - 10 November 2025

    Abstract Network attacks have become a critical issue in the internet security domain. Artificial intelligence technology-based detection methodologies have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to explain the detection results logically using artificial intelligence. We propose a method for classifying network attacks using graph models to explain the detection results. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks using edge classification. To explain the prediction results, we…
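
    A minimal sketch of the edge-classification idea: hosts as nodes, packets or flows as edges, and a classifier over learned endpoint embeddings scoring each edge. Built with PyTorch Geometric; layer choices and sizes are illustrative, not the authors':

    ```python
    # Node embeddings from two GCN layers; each edge is classified from
    # the concatenated embeddings of its two endpoint hosts.
    import torch
    from torch_geometric.nn import GCNConv

    class EdgeIDS(torch.nn.Module):
        def __init__(self, in_dim, hid_dim, num_classes):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hid_dim)
            self.conv2 = GCNConv(hid_dim, hid_dim)
            self.edge_mlp = torch.nn.Linear(2 * hid_dim, num_classes)

        def forward(self, x, edge_index):
            h = self.conv1(x, edge_index).relu()
            h = self.conv2(h, edge_index)
            src, dst = edge_index                       # endpoints of each edge
            edge_repr = torch.cat([h[src], h[dst]], dim=-1)
            return self.edge_mlp(edge_repr)             # per-edge attack logits

    # toy graph: 4 hosts, 3 packets/flows, 16 node features
    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
    logits = EdgeIDS(16, 32, num_classes=5)(x, edge_index)
    ```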

  • Open Access

    ARTICLE

    LinguTimeX: A Framework for Multilingual CTC Detection Using Explainable AI and Natural Language Processing

    Omar Darwish1, Shorouq Al-Eidi2, Abdallah Al-Shorman1, Majdi Maabreh3, Anas Alsobeh4, Plamen Zahariev5, Yahya Tashtoush6,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-21, 2026, DOI:10.32604/cmc.2025.068266 - 10 November 2025

    Abstract Covert timing channels (CTC) exploit network resources to establish hidden communication pathways, posing significant risks to data security and policy compliance. Detecting such hidden and dangerous threats therefore remains a pressing security challenge. This paper proposes LinguTimeX, a new framework that combines natural language processing and Artificial Intelligence (AI) with explainable AI, not only to detect CTC but also to provide insight into the decision process. LinguTimeX performs multidimensional feature extraction by fusing linguistic attributes with temporal network patterns to identify covert channels precisely. LinguTimeX demonstrates strong effectiveness in detecting CTC across…
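
    The fusion step the abstract describes, linguistic attributes concatenated with temporal statistics before classification, might be sketched as follows; the payload text, timing features, and classifier are all hypothetical stand-ins:

    ```python
    # Toy fusion of linguistic (TF-IDF over payload text) and temporal
    # (inter-arrival-time statistics) features into one feature matrix.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer

    def timing_features(inter_arrival):
        """Simple temporal statistics over packet inter-arrival times."""
        t = np.asarray(inter_arrival)
        return [t.mean(), t.std(), np.median(t), t.max() - t.min()]

    payloads = ["GET /index.html HTTP/1.1", "POST /login HTTP/1.1"]
    timings = [[0.10, 0.11, 0.09, 0.52], [0.20, 0.21, 0.19, 0.22]]
    labels = [1, 0]  # 1 = covert timing channel, 0 = benign (illustrative)

    linguistic = TfidfVectorizer().fit_transform(payloads).toarray()
    temporal = np.array([timing_features(t) for t in timings])
    X = np.hstack([linguistic, temporal])  # the multidimensional fusion step

    clf = RandomForestClassifier(random_state=0).fit(X, labels)
    ```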

  • Open Access

    ARTICLE

    Explainable Machine Learning for Phishing Detection: Bridging Technical Efficacy and Legal Accountability in Cyberspace Security

    MD Hamid Borkot Tulla1,*, MD Moniur Rahman Ratan2, Rashid MD Mamunur3, Abdullah Hil Safi Sohan4, MD Matiur Rahman5

    Journal of Cyber Security, Vol.7, pp. 675-691, 2025, DOI:10.32604/jcs.2025.074737 - 24 December 2025

    Abstract Phishing is considered one of the most widespread cybercrimes because it exploits both technical and human vulnerabilities to steal sensitive information. Traditional blacklist and heuristic-based defenses fail to detect emerging attack patterns; hence, intelligent and transparent detection systems are needed. This paper proposes an explainable machine learning framework that integrates predictive performance with regulatory accountability. Four models, trained and tested on a balanced dataset of 10,000 URLs (5000 phishing and 5000 legitimate samples, each characterized by 48 lexical and content-based features), were compared: Decision Tree, XGBoost, Logistic…
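
    The pairing of a URL classifier with a model-agnostic explanation can be sketched as below; synthetic data stands in for the 48-feature dataset, and a single Gradient Boosting model stands in for the four models the paper compares:

    ```python
    # Train a classifier on 48 stand-in features, then ask which features
    # drive its predictions via permutation importance (a model-agnostic
    # explanation, not necessarily the paper's XAI tooling).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=10_000, n_features=48, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
    top = imp.importances_mean.argsort()[::-1][:5]
    print("most influential feature indices:", top)
    ```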

  • Open Access

    REVIEW

    Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation

    Khulud Salem Alshudukhi1,*, Sijjad Ali2, Mamoona Humayun3,*, Omar Alruwaili4

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3029-3085, 2025, DOI:10.32604/cmes.2025.073705 - 23 December 2025

    Abstract Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity—as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a…

Displaying results 1-10 of 51.