Submission Deadline: 10 March 2023
Developments in Artificial Intelligence (AI) and Machine Learning (ML) are profoundly affecting people's lives, touching the health, safety, education, and opportunities of millions. AI is ushering in a paradigm shift in healthcare, owing to the growing availability of structured and unstructured data and the rapid development of big data analytic methods. These clinical data take many forms, including demographics, medical notes, electronic recordings from medical devices (sensors), physical examinations, clinical laboratory results, and clinical photographs. ML algorithms are already providing unprecedented insight: diagnosing diseases from histopathological examination or medical imaging, detecting malignant tumors in radiological images, identifying malignancy in photographs of skin lesions, discovering new drugs, characterizing treatment variability and patient outcomes, and guiding researchers in constructing cohorts for co-registration.
Existing deep learning models, however, are poorly interpretable: they offer neither explanations for their outputs nor assurances that their predictions are reliable. Current AI also faces a number of other obstacles, including ethical, legal, societal, and technological issues. Trustworthy and explainable AI technologies based on Deep Learning (DL) are therefore an emerging area of research with considerable promise for improving high-quality healthcare. Explainable AI refers to AI/DL tools and approaches that provide human-comprehensible answers, such as explanations and interpretations of disease diagnoses and forecasts, as well as suggested actions.
This special issue invites and encourages new studies and research on interpretable AI/ML approaches that produce human-readable explanations. The research investigations invited under this special issue address the following goals:
• To improve trust and minimize analysis bias.
• To promote discussion on system designs.
• To employ and assess novel explainable AI methods that improve the accuracy of pathology workflows for disease diagnostic and prognostic purposes.
• To demonstrate, through practical use cases of trustworthy AI models, how a layer of interpretability and trust can be added to powerful algorithms such as neural networks and ensemble methods to deliver near real-time intelligence.