Submission Deadline: 30 September 2025
Prof. Athanasios Karlis
Email: akarlis@ee.duth.gr
Affiliation: Department of Electrical and Computer Engineering, Democritus University of Thrace, VASILISIS SOFIAS 12, XANTHI, 67100, Greece
Research Interests: Diagnostics in electrical machines and drives, renewable energy sources, and electrical power systems
Prof. Jose Antonino-Daviu
Email: joanda@die.upv.es
Affiliation: Escuela Técnica Superior de Ingeniería Industrial, Universitat Politècnica de València, Algirós, 46022, Spain
Research Interests: Induction Motor, Short-time Fourier Transform, Fast Fourier Transform, Healthy Conditions, Stator Current, Electrical Engineering, Stray Flux, Stray Flux Signals, Synchronous Motor, Fault Diagnosis, Severe Defects, Current Spectrum
Electric machines are fundamental components of industrial, energy, and transportation systems, and ensuring their reliability, efficiency, and fault tolerance is essential for minimizing downtime, operational costs, and safety risks to personnel. The emergence of Industry 4.0 has driven the widespread adoption of predictive maintenance and data-driven fault diagnostics powered by Artificial Intelligence (AI). AI and Machine Learning (ML) have enabled real-time condition monitoring, anomaly detection, and Remaining Useful Life (RUL) estimation, significantly improving fault prevention strategies. These advances have allowed industries to move from traditional reactive and scheduled maintenance toward proactive and predictive maintenance methodologies, reducing unexpected failures and optimizing machine performance.
However, despite these advances, a major challenge remains: the lack of transparency and interpretability of AI-based models. Many advanced AI-driven fault detection and optimization techniques operate as black boxes, making it difficult for engineers and operators to understand why a specific decision was made. This raises concerns about trust, accountability, regulatory compliance, and overall adoption in critical industrial applications, and the inability to explain AI decisions limits human oversight.
To address these issues, Industry 5.0 extends Industry 4.0 by emphasizing a human-centric, sustainable, and resilient industrial paradigm. Explainable AI (XAI) becomes crucial in bridging the gap between high-performance AI models and practical, interpretable, and transparent decision-making frameworks. By integrating explainability into predictive maintenance and optimization strategies, industries can ensure greater trust, closer collaboration between AI systems and human experts, and ethical AI deployment in real-world applications.
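As a concrete illustration of the kind of explainability discussed above, the following is a minimal sketch (not from the call itself) of a model-agnostic XAI technique applied to fault classification: a classifier is trained on synthetic features loosely inspired by motor-current signature analysis, and permutation importance is used to report which inputs actually drive its fault decisions. All feature names and the data-generating setup are illustrative assumptions.

```python
# Hedged sketch: explainable fault classification via permutation importance.
# The features ("fund", "h5", "h7", "temp") and the synthetic fault rule are
# invented for illustration; only the XAI technique itself is the point.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Hypothetical monitoring features: fundamental amplitude, 5th and 7th
# current harmonics, and winding temperature.
X = rng.normal(size=(n, 4))
# Simulated fault label: driven mainly by the 5th-harmonic feature (column 1).
y = (X[:, 1] + 0.2 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance measures the accuracy drop when each feature is
# shuffled, yielding a model-agnostic, human-readable explanation of which
# signals the black-box model relies on.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["fund", "h5", "h7", "temp"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In this toy setup the 5th-harmonic feature should dominate the importance ranking, mirroring how an operator could verify that a deployed model attends to physically meaningful fault signatures rather than spurious correlations.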