Guest Editors
Prof. Mu-Yen Chen, National Cheng Kung University, Taiwan
Dr. Mary Gladence, Sathyabama Institute of Science and Technology, India
Dr. Hsin-Te Wu, National Taitung University, Taiwan
Summary
When it comes to technological development, many countries now regard Artificial Intelligence (AI) as a critical area of interest. Current AI systems achieve high performance across many algorithmic and recognition tasks. However, there is concern over transparency in AI development, because we can observe only the input data and the output results without having access to the intermediate computation. For instance, machine learning can find a near-optimal solution by running hundreds of thousands of trials through black-box optimization, but the computation all takes place inside a black box, from which we cannot trace the rationale behind each decision. Besides pursuing accuracy, we should also address how AI systems reach their black-box decisions, which calls for research into Explainable AI (XAI) through reverse engineering and self-explainability. The key idea is to make the whole pipeline of an AI algorithm, from the input, through the decision-making process, to the output, accessible and traceable, so that users and operators can rely on XAI to produce transparent explanations for the decisions made, reinforcing trust and confidence in an AI system's reliability.
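To make the black-box problem above concrete, the sketch below illustrates one common post-hoc explanation technique, permutation importance. The model, data, and function names here are hypothetical illustrations, not part of this special issue: by shuffling one input feature at a time and measuring how much the prediction error grows, an analyst can see which inputs actually drive a black box's decisions, even without access to its internals.

```python
import random

def black_box(x1, x2, x3):
    """A hypothetical opaque model: callers see only inputs and outputs.
    (Internally it depends strongly on x1, weakly on x2, not at all on x3.)"""
    return 3.0 * x1 + 0.5 * x2

# Synthetic dataset for illustration only.
random.seed(0)
data = [(random.random(), random.random(), random.random()) for _ in range(200)]
targets = [black_box(*row) for row in data]

def mean_squared_error(preds, actual):
    return sum((p - a) ** 2 for p, a in zip(preds, actual)) / len(actual)

def permutation_importance(feature_index):
    """Shuffle one feature column and report how much the error grows.
    A larger error means the model relied more on that feature."""
    column = [row[feature_index] for row in data]
    random.shuffle(column)
    perturbed = [
        tuple(column[i] if j == feature_index else row[j] for j in range(3))
        for i, row in enumerate(data)
    ]
    return mean_squared_error([black_box(*row) for row in perturbed], targets)

for i in range(3):
    print(f"feature {i}: error after shuffling = {permutation_importance(i):.4f}")
```

Running the sketch shows the error growing most when the first feature is shuffled and not at all for the unused third feature, which is exactly the kind of traceable, human-readable account of a black box's behavior that XAI research aims to provide.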
XAI can be applied in various areas, including image recognition, medical imaging, weather forecasting, and social networking. In image recognition, XAI can detect whether the subject of an image is in disguise and whether, once that disguise is removed, the subject's other features resemble the identified person, which helps operators understand how a disguise relates to the similarity to a person. In medical imaging, an XAI model helps medical professionals analyze patients' X-ray films, understand each step of the analysis, and reach a diagnosis swiftly. In social networking, XAI can be applied to study human behavior and interpersonal relationships.
Given XAI’s importance and rich applications, it is a highly worthwhile research topic. For this special issue, our goal is to address more than just XAI algorithms. We hope to explore XAI applications and research in more areas of study, and to see how XAI models can take the vast amount of available data and help us uncover previously unknown phenomena, extract useful knowledge, and draw well-reasoned conclusions. This special issue encourages scholars and technology professionals across disciplines to propose new XAI frameworks, innovative approaches, and novel applications.
Keywords
Theoretical models, architectures, protocols, and frameworks for XAI
Human-centric systems for XAI
Knowledge representation for XAI
Reverse engineering for XAI
Natural language systems for XAI
Computer vision for XAI
eHealth for XAI
Explainable black-box models for XAI
Security and privacy concerns for XAI
Use cases
Published Papers