Search Results (12)
  • Open Access

    ARTICLE

    Enhancing Septic Shock Detection through Interpretable Machine Learning

    Md Mahfuzur Rahman1,*, Md Solaiman Chowdhury2, Mohammad Shorfuzzaman3, Lutful Karim4, Md Shafiullah5, Farag Azzedin1

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.3, pp. 2501-2525, 2024, DOI:10.32604/cmes.2024.055065 - 31 October 2024

    Abstract This article presents an innovative approach that leverages interpretable machine learning models and cloud computing to accelerate the detection of septic shock by analyzing electronic health data. Unlike traditional methods, which often lack transparency in decision-making, our approach focuses on early detection, offering a proactive strategy to mitigate the risks of sepsis. By integrating advanced machine learning algorithms with interpretability techniques, our method not only provides accurate predictions but also offers clear insights into the factors influencing the model’s decisions. Moreover, we introduce a preference-based matching algorithm to evaluate disease severity, enabling timely interventions guided…

  • Open Access

    ARTICLE

    Hyperspectral Image Based Interpretable Feature Clustering Algorithm

    Yaming Kang1,*, Peishun Ye1, Yuxiu Bai1, Shi Qiu2

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2151-2168, 2024, DOI:10.32604/cmc.2024.049360 - 15 May 2024

    Abstract Hyperspectral imagery encompasses spectral and spatial dimensions, reflecting the material properties of objects. It is crucial in applications such as search and rescue, concealed target identification, and crop growth analysis. Clustering is an important method of hyperspectral analysis. The vast data volume of hyperspectral imagery, coupled with redundant information, poses significant challenges to swiftly and accurately extracting features for subsequent analysis. Current hyperspectral feature clustering methods, which mostly operate in either the spatial or the spectral domain, lack strong interpretability, making the resulting algorithms hard to comprehend. This research therefore introduces a feature clustering algorithm for hyperspectral…

  • Open Access

    ARTICLE

    Explainable AI and Interpretable Model for Insurance Premium Prediction

    Umar Abdulkadir Isa*, Anil Fernando*

    Journal on Artificial Intelligence, Vol.5, pp. 31-42, 2023, DOI:10.32604/jai.2023.040213 - 11 August 2023

    Abstract Traditional machine learning metrics (TMLMs), namely precision, recall, accuracy, MSE, and RMSE, are quite useful for the current research work. However, they are not enough for a practitioner to be confident in the performance and dependability of an innovative interpretable model (85%–92%). We included in the prediction process machine learning models (MLMs) with greater than 99% accuracy and a sensitivity of 95%–98% on the database. The model must be explained to domain specialists through the MLMs, and human-understandable explanations, in addition to those for ML professionals, are needed to establish trust in our model’s predictions. This is achieved by creating…

  • Open Access

    ARTICLE

    Safety Assessment of Liquid Launch Vehicle Structures Based on Interpretable Belief Rule Base

    Gang Xiang1,2, Xiaoyu Cheng3, Wei He3,4,*, Peng Han3

    Computer Systems Science and Engineering, Vol.47, No.1, pp. 273-298, 2023, DOI:10.32604/csse.2023.037892 - 26 May 2023

    Abstract A liquid launch vehicle is an important carrier in aviation, and its regular operation is essential to maintaining space security. In the safety assessment of the liquid launch vehicle body structure, the assessment model must be able to learn self-response rules from various uncertain data while providing a traceable and interpretable assessment process. Therefore, a belief rule base with interpretability (BRB-i) assessment method for the safety status of liquid launch vehicle structures is proposed, combining data and knowledge. Moreover, an innovative whale optimization algorithm with interpretable constraints is proposed. The experiments are carried out…

  • Open Access

    ARTICLE

    A Novel Computationally Efficient Approach to Identify Visually Interpretable Medical Conditions from 2D Skeletal Data

    Praveen Jesudhas1,*, T. Raghuveera2

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 2995-3015, 2023, DOI:10.32604/csse.2023.036778 - 03 April 2023

    Abstract Timely identification and treatment of medical conditions could facilitate faster recovery and better health. Existing systems address this issue using custom-built sensors, which are invasive and difficult to generalize. A low-complexity, scalable process is proposed to detect and identify medical conditions from 2D skeletal movements in video feed data. A minimal set of features relevant to distinguishing medical conditions, namely AMF, PVF, and GDF, is derived from skeletal data on frames sampled across the entire action. The AMF (angular motion features) are derived to capture the angular motion of limbs during a specific action. The relative position…

  • Open Access

    ARTICLE

    A Processor Performance Prediction Method Based on Interpretable Hierarchical Belief Rule Base and Sensitivity Analysis

    Chen Wei-wei1, He Wei1,2,*, Zhu Hai-long1, Zhou Guo-hui1, Mu Quan-qi1, Han Peng1

    CMC-Computers, Materials & Continua, Vol.74, No.3, pp. 6119-6143, 2023, DOI:10.32604/cmc.2023.035743 - 28 December 2022

    Abstract The prediction of processor performance provides an important reference for the design of future processors. Both the accuracy and the rationality of the prediction results are required. The hierarchical belief rule base (HBRB) can initially provide a solution to low prediction accuracy. However, the interpretability of the model and the traceability of the results still warrant further investigation. Therefore, a processor performance prediction method based on an interpretable hierarchical belief rule base (HBRB-I) and global sensitivity analysis (GSA) is proposed. The method can yield more reliable prediction results. Evidential reasoning (ER) is first used to evaluate the historical data of…

  • Open Access

    ARTICLE

    An Interpretable CNN for the Segmentation of the Left Ventricle in Cardiac MRI by Real-Time Visualization

    Jun Liu1, Geng Yuan2, Changdi Yang2, Houbing Song3, Liang Luo4,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.135, No.2, pp. 1571-1587, 2023, DOI:10.32604/cmes.2022.023195 - 27 October 2022

    Abstract The interpretability of deep learning models has emerged as a compelling area in artificial intelligence research. The safety criteria for medical imaging are highly stringent, and models are required to provide an explanation. However, existing convolutional neural network solutions for left ventricular segmentation are viewed only in terms of inputs and outputs. Thus, the interpretability of CNNs has come into the spotlight. Since medical imaging data are limited, many medical imaging models have been built by fine-tuning networks pre-trained on the massive public ImageNet dataset, a popular transfer learning method. Unfortunately, this generates…

  • Open Access

    ARTICLE

    Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods

    Wahidul Hasan Abir1, Faria Rahman Khanam1, Kazi Nabiul Alam1, Myriam Hadjouni2, Hela Elmannai3, Sami Bourouis4, Rajesh Dey5, Mohammad Monirujjaman Khan1,*

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2151-2169, 2023, DOI:10.32604/iasc.2023.029653 - 19 July 2022

    Abstract Nowadays, deepfakes are wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person’s likeness with that of another in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfakes use the latest technologies, such as Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), to construct automated methods for creating fake content that is becoming increasingly difficult…

  • Open Access

    ARTICLE

    An Interpretable Artificial Intelligence Based Smart Agriculture System

    Fariza Sabrina1,*, Shaleeza Sohail2, Farnaz Farid3, Sayka Jahan4, Farhad Ahamed5, Steven Gordon6

    CMC-Computers, Materials & Continua, Vol.72, No.2, pp. 3777-3797, 2022, DOI:10.32604/cmc.2022.026363 - 29 March 2022

    Abstract With the increasing world population, the demand for food production has grown exponentially. An Internet of Things (IoT) based smart agriculture system can play a vital role in optimising crop yield by managing crop requirements in real time. Interpretability can be an important factor in making such systems trusted and easily adopted by farmers. In this paper, we propose a novel artificial intelligence-based agriculture system that uses IoT data to monitor the environment and alerts farmers to take the required actions for maintaining ideal conditions for crop production. The strength of the proposed system is in its interpretability…

  • Open Access

    ARTICLE

    Interpretable and Adaptable Early Warning Learning Analytics Model

    Shaleeza Sohail1, Atif Alvi2,*, Aasia Khanum3

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 3211-3225, 2022, DOI:10.32604/cmc.2022.023560 - 07 December 2021

    Abstract Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain. Interpretability makes it easy for stakeholders to understand the working of these models, and adaptability makes it easy to use the same model for multiple cohorts and courses in educational institutions. Recently, some models in learning analytics have been constructed with interpretability in mind, but their interpretability is not quantified. Moreover, adaptability is not specifically considered in this domain. This paper presents a new framework based on hybrid statistical…

Displaying 1-10 of 12 on page 1.