
Next-Generation AI for Ethical and Explainable Decision-Making in Critical Systems

Submission Deadline: 01 June 2025

Guest Editor

Assoc. Prof. Ishaani Priyadarshini

Email: ip256@cornell.edu; i.priyadarsini@edmonds.edu

Affiliation: Data Science, Cornell University, Ithaca, USA; Edmonds College, Washington, USA


Research Interests: Cybersecurity, Artificial Intelligence, and Machine Learning



Summary

The rapid advancement of artificial intelligence (AI) has led to its pervasive integration into critical systems such as healthcare, finance, transportation, and defense. These systems often operate in high-stakes environments where decision-making accuracy, transparency, and ethical considerations are paramount. Traditional AI models, while powerful, have struggled to produce decisions that are explainable and ethically sound, especially in complex scenarios. As AI continues to evolve, there is a pressing need for next-generation AI systems that prioritize ethical considerations and offer clear, interpretable, and trustworthy decision-making processes.

This special issue explores the development and application of next-generation AI technologies designed to strengthen ethical standards and provide explainable decision-making in critical systems. By focusing on cutting-edge research and innovative approaches, it seeks to bridge the gap between AI advancements and the ethical and practical requirements of real-world critical systems. The goal is to foster a deeper understanding of how AI can be responsibly integrated into critical applications, ensuring that these systems not only perform at high levels but also adhere to the highest ethical standards.

This special issue invites original research articles and comprehensive reviews that address the challenges and opportunities of developing ethical and explainable AI for critical systems. We encourage submissions covering a broad range of topics, including but not limited to:

● Development of ethical AI frameworks and guidelines for critical systems.

● Explainable AI models and techniques for enhancing transparency in decision-making.

● Integration of AI ethics into the design and deployment of autonomous systems.

● Methods for mitigating bias and ensuring fairness in AI-driven decision-making processes.

● Case studies of AI applications in safety-critical domains such as healthcare, finance, and transportation.

● Human-AI collaboration strategies to improve decision-making in critical environments.

● Regulatory and compliance considerations for AI in critical systems.

● Trust and accountability mechanisms in AI-based decision support systems.

● Innovations in AI governance and management for critical applications.

● AI-driven decision-making in emergency response and disaster management.


This special issue aims to serve as a comprehensive platform for researchers and practitioners to share their latest findings and insights on ethical and explainable AI. By addressing these crucial aspects, we hope to contribute to the responsible development and deployment of AI technologies in critical systems, ensuring that they serve the greater good while upholding ethical standards and public trust.


Keywords

Ethical AI; Explainable AI (XAI); AI in critical systems; AI-driven decision-making; Transparent AI models; AI accountability; Bias mitigation in AI; AI fairness; Responsible AI; Trustworthy AI systems
