Search Results (6)
  • Open Access

    ARTICLE

    Machine Learning-Driven Classification for Enhanced Rule Proposal Framework

    B. Gomathi1,*, R. Manimegalai1, Srivatsan Santhanam2, Atreya Biswas3

    Computer Systems Science and Engineering, Vol.48, No.6, pp. 1749-1765, 2024, DOI:10.32604/csse.2024.056659 - 22 November 2024

    Abstract In enterprise operations, maintaining manual rules for enterprise processes can be expensive, time-consuming, and dependent on specialized domain knowledge. Recently, rule generation has been automated in enterprises, particularly through Machine Learning, to streamline routine tasks. Typically, these Machine Learning models are black boxes: the reasons behind their decisions are not always transparent, and end users need to verify the model’s proposals as part of user acceptance testing before trusting them. In such scenarios, rules excel over Machine Learning models, as end users can verify the rules and have more…
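    As a loose illustration of why verifiable rules are attractive, the sketch below derives human-readable rules from a trained decision tree with scikit-learn. It is not the paper’s framework; the dataset and feature names are invented for the example.

    # Minimal sketch (not the paper's framework): surfacing human-readable
    # rules from a trained tree model so end users can inspect the logic.
    # The dataset and feature names here are hypothetical.
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    feature_names = ["amount", "vendor_score", "days_open", "priority"]

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # export_text renders the tree as nested if/else conditions that a
    # domain expert can read and verify, unlike a black box's raw scores.
    print(export_text(clf, feature_names=feature_names))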

  • Open Access

    ARTICLE

    A Study on the Explainability of Thyroid Cancer Prediction: SHAP Values and Association-Rule Based Feature Integration Framework

    Sujithra Sankar1,*, S. Sathyalakshmi2

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 3111-3138, 2024, DOI:10.32604/cmc.2024.048408 - 15 May 2024

    Abstract In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule based, feature-integrated machine learning model that shows better classification and prediction accuracy than current state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as…
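    For readers unfamiliar with SHAP values, the fragment below shows generic usage of the shap package on a synthetic classifier. It is not the study’s model or data; the stand-in feature roles are invented.

    # Generic SHAP usage (not the study's model or data).
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 3))                  # hypothetical clinical features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic label

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    sv = shap.TreeExplainer(model).shap_values(X)
    if isinstance(sv, list):   # older shap: one array per class
        sv = sv[1]
    elif sv.ndim == 3:         # newer shap: (samples, features, classes)
        sv = sv[..., 1]

    # Mean |SHAP| per feature gives a simple global importance ranking.
    print(np.abs(sv).mean(axis=0))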

  • Open Access

    ARTICLE

    Explainable Conformer Network for Detection of COVID-19 Pneumonia from Chest CT Scan: From Concepts toward Clinical Explainability

    Mohamed Abdel-Basset1, Hossam Hawash1, Mohamed Abouhawwash2,3,*, S. S. Askar4, Alshaimaa A. Tantawy1

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 1171-1187, 2024, DOI:10.32604/cmc.2023.044425 - 30 January 2024

    Abstract The early implementation of treatment therapies necessitates the swift and precise identification of COVID-19 pneumonia through the analysis of chest CT scans. This study investigates the need for precise and interpretable diagnostic tools to improve clinical decision-making in COVID-19 diagnosis. This paper proposes a novel deep learning approach, called Conformer Network, for explainable discrimination of viral pneumonia based on the lung Region of Infection (ROI) within a single-modality radiographic CT scan. First, an efficient U-shaped transformer network is integrated for lung image segmentation. Then, a robust transfer learning technique is introduced…
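    To fix the general shape being described, here is a toy U-shaped encoder-decoder in PyTorch. Plain convolutions stand in for the paper’s transformer blocks, so this is emphatically not the Conformer Network itself, only the skip-connected downsample/upsample pattern such segmentation models share.

    # Toy U-shaped segmentation sketch (convolutions standing in for the
    # paper's transformer blocks; not the actual Conformer Network).
    import torch
    import torch.nn as nn

    class MiniUNet(nn.Module):
        def __init__(self, in_ch=1, out_ch=1):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
            self.pool = nn.MaxPool2d(2)
            self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(16, out_ch, 1)   # per-pixel logits

        def forward(self, x):
            e1 = self.enc1(x)                      # full-resolution features
            e2 = self.enc2(self.pool(e1))          # downsampled features
            d1 = self.up(e2)                       # upsample back
            d1 = self.dec1(torch.cat([d1, e1], dim=1))  # skip connection
            return self.head(d1)

    # e.g. a 64x64 single-channel CT slice -> a 64x64 mask of ROI logits
    mask_logits = MiniUNet()(torch.randn(1, 1, 64, 64))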

  • Open Access

    ARTICLE

    Explainable Artificial Intelligence-Based Model Drift Detection Applicable to Unsupervised Environments

    Yongsoo Lee, Yeeun Lee, Eungyu Lee, Taejin Lee*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1701-1719, 2023, DOI:10.32604/cmc.2023.040235 - 30 August 2023

    Abstract Cybersecurity increasingly relies on machine learning (ML) models to detect and respond to attacks. However, the rapidly changing data environment makes model life-cycle management after deployment essential. Real-time detection of drift signals from various threats is fundamental to managing deployed models effectively. However, detecting drift in unsupervised environments can be challenging. This study introduces a novel approach leveraging Shapley additive explanations (SHAP), a widely recognized explainability technique in ML, to address drift detection in unsupervised settings. The proposed method incorporates a range of plots and statistical techniques to enhance drift detection reliability and introduces a…
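    The underlying idea, comparing how explanations shift over time, can be caricatured in a few lines: compute SHAP values on a reference window and on a new window, then test each feature’s SHAP distribution for a shift. The sketch below uses a two-sample Kolmogorov-Smirnov test as an assumed stand-in for the paper’s plots and statistics; the threshold and data are invented.

    # SHAP-drift caricature (an assumed stand-in, not the paper's method):
    # flag a feature when its SHAP distribution shifts between windows.
    import numpy as np
    import shap
    from scipy.stats import ks_2samp
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    X_ref = rng.normal(size=(400, 4))
    y_ref = X_ref[:, 0] - X_ref[:, 2] + rng.normal(scale=0.1, size=400)
    X_new = X_ref + np.array([0.0, 0.0, 1.5, 0.0])   # feature 2 drifts

    model = RandomForestRegressor(n_estimators=50, random_state=1).fit(X_ref, y_ref)
    explainer = shap.TreeExplainer(model)
    sv_ref = explainer.shap_values(X_ref)   # (samples, features) for a regressor
    sv_new = explainer.shap_values(X_new)

    for j in range(X_ref.shape[1]):
        stat, p = ks_2samp(sv_ref[:, j], sv_new[:, j])
        if p < 0.01:                         # hypothetical alert threshold
            print(f"feature {j}: possible drift (KS={stat:.2f}, p={p:.3g})")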

  • Open Access

    REVIEW

    Explainable Rules and Heuristics in AI Algorithm Recommendation Approaches—A Systematic Literature Review and Mapping Study

    Francisco José García-Peñalvo*, Andrea Vázquez-Ingelmo, Alicia García-Holgado

    CMES-Computer Modeling in Engineering & Sciences, Vol.136, No.2, pp. 1023-1051, 2023, DOI:10.32604/cmes.2023.023897 - 06 February 2023

    Abstract The exponential use of artificial intelligence (AI) to solve and automate complex tasks has catapulted its popularity, generating challenges that need to be addressed. While AI is a powerful means to discover interesting patterns and obtain predictive models, the use of these algorithms comes with great responsibility, as an incomplete or unbalanced set of training data or an improper interpretation of the models’ outcomes could result in misleading conclusions that ultimately could become very dangerous. For these reasons, it is important to rely on expert knowledge when applying these methods. However, not every…

  • Open Access

    REVIEW

    Explainable Artificial Intelligence–A New Step towards the Trust in Medical Diagnosis with AI Frameworks: A Review

    Nilkanth Mukund Deshpande1,2, Shilpa Gite6,7,*, Biswajeet Pradhan3,4,5, Mazen Ebraheem Assiri4

    CMES-Computer Modeling in Engineering & Sciences, Vol.133, No.3, pp. 843-872, 2022, DOI:10.32604/cmes.2022.021225 - 03 August 2022

    Abstract Machine learning (ML) has emerged as a critical enabling tool in the sciences and industry in recent years. Today’s machine learning algorithms can achieve outstanding performance on an expanding variety of complex tasks, thanks to advancements in technique, the availability of enormous databases, and improved computing power. Deep learning models are at the forefront of this advancement. However, because of their nested nonlinear structure, these powerful models are termed “black boxes,” as they provide no information about how they arrive at their conclusions. Such a lack of transparency may be unacceptable in many applications, such…

Displaying 1-6 of 6.