Guest Editors
Prof. Huchang Liao, Sichuan University, China
Prof. Xiaobin Xu, Hangzhou Dianzi University, China
Prof. Jiang Jiang, National University of Defense Technology, China
Dr. Guilan Kong, Peking University, China
Summary
Artificial intelligence (AI) assists human agents in decision making and has been widely applied in fields such as engineering, healthcare, e-commerce, and finance. The explainability of AI directly affects the feasibility of its practical applications, and the demand for explainability is particularly strong in heavily regulated, high-risk fields such as healthcare, finance, and information security. To improve the explainability of AI, enhance its applicability, and strengthen users' trust in decision-making results, researchers have combined AI with reasoning and decision-making methodologies.
In an information age of exploding data volumes, another important factor affecting the practical application of AI is the uncertainty of massive historical data, including incomplete data, ambiguous data representations, and data of unclear reliability. Evidential reasoning is a theoretical tool for processing uncertain data and a typical method for intelligent reasoning over, and fusion of, uncertain information. Various evidential reasoning based AI systems, such as the belief rule-base inference methodology using the evidential reasoning approach (RIMER), have been proposed to address practical problems under uncertainty. Research on, and applications of, intelligent reasoning and decision-making methods based on evidential reasoning have received widespread attention, and significant progress has been made. In the context of big data, however, decision-making problems are usually complex. Medical aided diagnosis, for example, involves large amounts of uncertain data and imposes high explainability requirements on the reasoning process if it is to gain the trust of doctors. Further research is therefore needed on data-driven intelligent reasoning and decision-making methodologies and their applications in the era of big data.
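To make the fusion step behind evidential reasoning concrete, the sketch below implements Dempster's rule of combination, the core operator of Dempster-Shafer theory, in plain Python. The toy frame of discernment ({flu, cold}), the mass values, and all variable names are illustrative assumptions for this call, not drawn from RIMER or any specific paper; full evidential reasoning approaches use weighted, recursive variants of this rule.

```python
def combine(m1, m2):
    """Fuse two basic probability assignments with Dempster's rule.

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Returns the combined assignment; raises on total conflict.
    """
    fused = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass that lands on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Normalize by 1 - K, redistributing the conflicting mass K.
    return {h: v / (1.0 - conflict) for h, v in fused.items()}


# Two pieces of diagnostic evidence over a hypothetical frame {flu, cold}.
FLU, COLD = frozenset({"flu"}), frozenset({"cold"})
BOTH = FLU | COLD  # mass on the whole frame encodes ignorance

m1 = {FLU: 0.6, BOTH: 0.4}             # evidence source 1
m2 = {FLU: 0.5, COLD: 0.3, BOTH: 0.2}  # evidence source 2

for hypothesis, mass in combine(m1, m2).items():
    print(sorted(hypothesis), round(mass, 4))
```

On these inputs the conflicting mass is K = 0.18, so the fused beliefs are roughly 0.756 for flu, 0.146 for cold, and 0.098 left on the whole frame; the explicit mass on ignorance is one reason such methods lend themselves to explainable reasoning chains.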
This special issue solicits original research on the latest theoretical and applied innovations in intelligent reasoning and decision-making methodologies, and on their use to enhance explainable AI in different areas. We welcome original innovations and applications of intelligent reasoning and decision-making methodologies that contribute to exploring the information patterns of massive data, improving the explainability of AI, and enhancing the efficiency of reasoning and decision making in practical applications. Potential topics of interest include, but are not restricted to, the following:
Multiple criteria decision-making technologies for data-driven explainable AI
Group decision-making technologies for data-driven explainable AI
Large-scale group decision-making technologies for data-driven explainable AI
Dempster-Shafer theory and extended technologies for data-driven explainable AI
Evidential reasoning approach and its extensions for data-driven explainable AI
Belief rule-base inference technologies for data-driven explainable AI
Data-driven explainable AI under uncertainty
Fuzzy decision-making technologies for data-driven explainable AI
Innovation and applications of data-driven explainable AI in engineering
Innovation and applications of data-driven explainable AI in management
Innovation and applications of data-driven explainable AI in healthcare
Innovation and applications of data-driven explainable AI in computer science
Published Papers