Special Issues

Intelligent reasoning and decision-making towards the explainability of AI

Submission Deadline: 28 February 2024 (closed)

Guest Editors

Prof. Huchang Liao, Sichuan University, China
Prof. Xiaobin Xu, Hangzhou Dianzi University, China
Prof. Jiang Jiang, National University of Defense Technology, China
Dr. Guilan Kong, Peking University, China

Summary

Artificial intelligence (AI) assists human agents in decision making and has been widely applied in fields such as engineering, healthcare, e-commerce, and finance. The explainability of AI directly affects the feasibility of practical AI systems, and the demand for it is particularly strong in highly regulated, high-risk fields such as healthcare, finance, and information security. To improve the explainability of AI, broaden its applicability, and strengthen users' trust in decision-making results, researchers have combined AI with reasoning and decision-making methodologies.


In the information age of data explosion, another important factor affecting the practical application of AI is the uncertainty of massive historical data, including incomplete data, ambiguous data representation, and unclear data reliability. Evidential reasoning is a theoretical tool for processing uncertain data and a typical method for intelligent reasoning and fusion of uncertain information. Various evidential reasoning-based AI systems, such as the belief rule-base inference methodology using the evidential reasoning approach (RIMER), have been proposed to address practical problems under uncertainty. Research on intelligent reasoning and decision-making methods based on evidential reasoning, and their applications, has received widespread attention, and significant progress has been made. In the context of big data, however, decision-making problems are usually complex. Medical aided diagnosis, for example, involves a large amount of uncertain data and imposes strong explainability requirements on the reasoning process in order to gain the trust of doctors. Further research is therefore needed on data-driven intelligent reasoning and decision-making methodologies and their applications in the era of big data.
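As a point of reference for the kind of uncertain-information fusion mentioned above, the sketch below shows Dempster's rule of combination, the basic evidence-fusion operation underlying Dempster-Shafer theory and evidential reasoning approaches such as RIMER. It is a minimal illustration only: the frame of discernment, the two "sensor" mass functions, and all names are illustrative assumptions, not material from this call for papers, and full evidential reasoning or BRB systems involve additional weighting and rule-activation steps not shown here.

```python
# Minimal sketch of Dempster's rule of combination for two mass functions
# defined over subsets of a frame of discernment. Illustrative only.
from typing import Dict, FrozenSet


def dempster_combine(m1: Dict[FrozenSet[str], float],
                     m2: Dict[FrozenSet[str], float]) -> Dict[FrozenSet[str], float]:
    """Combine two basic probability assignments (mass functions)."""
    combined: Dict[FrozenSet[str], float] = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence cannot be combined.")
    # Normalize by the non-conflicting mass (1 - K).
    return {s: v / (1.0 - conflict) for s, v in combined.items()}


if __name__ == "__main__":
    # Two hypothetical diagnostic sources reporting beliefs over {faulty, healthy}.
    m_source1 = {frozenset({"faulty"}): 0.6,
                 frozenset({"faulty", "healthy"}): 0.4}
    m_source2 = {frozenset({"faulty"}): 0.7,
                 frozenset({"healthy"}): 0.1,
                 frozenset({"faulty", "healthy"}): 0.2}
    for focal, mass in dempster_combine(m_source1, m_source2).items():
        print(set(focal), round(mass, 3))
```

With these example inputs the combined belief in "faulty" rises to roughly 0.87, illustrating how agreement between uncertain sources is reinforced while conflicting mass is normalized away.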


This special issue aims to solicit original research on the latest theoretical and applied innovations in intelligent reasoning and decision-making methodologies and their applications to enhance explainable AI in different areas. We welcome original innovations and applications of intelligent reasoning and decision-making methodologies that help explore the information patterns in massive data, improve the explainability of AI, and increase the efficiency of reasoning and decision-making in practical applications. Potential topics of interest include, but are not restricted to, the following:


Multiple criteria decision-making technologies for data-driven explainable AI

Group decision-making technologies for data-driven explainable AI

Large-scale group decision-making technologies for data-driven explainable AI

Dempster-Shafer theory and extended technologies for data-driven explainable AI

Evidential reasoning approach and its extensions for data-driven explainable AI

Belief rule-base inference technologies for data-driven explainable AI

Data-driven explainable AI under uncertainty

Fuzzy decision-making technologies for data-driven explainable AI

Innovation and applications of data-driven explainable AI in engineering

Innovation and applications of data-driven explainable AI in management

Innovation and applications of data-driven explainable AI in healthcare

Innovation and applications of data-driven explainable AI in computer science



Published Papers


  • Open Access

    ARTICLE

    A Health State Prediction Model Based on Belief Rule Base and LSTM for Complex Systems

    Yu Zhao, Zhijie Zhou, Hongdong Fan, Xiaoxia Han, Jie Wang, Manlin Chen
    Intelligent Automation & Soft Computing, Vol.39, No.1, pp. 73-91, 2024, DOI:10.32604/iasc.2024.042285
    (This article belongs to the Special Issue: Intelligent reasoning and decision-making towards the explainability of AI)
    Abstract In industrial production and engineering operations, the health state of complex systems is critical, and predicting it can ensure normal operation. Complex systems have many monitoring indicators, complex coupling structures, and non-linear, time-varying characteristics, so it is a challenge to establish a reliable prediction model. The belief rule base (BRB) can fuse observed data and expert knowledge to establish a nonlinear relationship between input and output and has good modeling capabilities. Since each indicator of the complex system can reflect the health state to some extent, the BRB is built based on the causal relationship…
