Submission Deadline: 31 October 2025
Prof. Weizhi Meng
Email: w.meng3@lancaster.ac.uk
Affiliation: Department of Computing and Communications, Lancaster University, Lancaster, LA1 4YW, UK
Research Interests: blockchain, AI, security
Dr. Chunhua Su
Email: chsu@u-aizu.ac.jp
Affiliation: Division of Computer Science, University of Aizu, Aizuwakamatsu, 965-8580, Japan
Research Interests: cryptography and secret sharing, IoT
Dr. Chao Chen
Email: chao.chen@rmit.edu.au
Affiliation: Department of Accounting, Information Systems & Supply Chain, RMIT University, Melbourne, 3000, Australia
Research Interests: cybersecurity and artificial intelligence
AI-enabled systems promise to improve efficiency, drive innovation, solve complex problems, and have a profound impact on the economy and society. Realizing this potential, however, requires addressing technical, ethical, and societal challenges. Through the responsible development and deployment of AI technology, we can create a more intelligent, efficient, and sustainable future.
At the same time, AI-enabled systems are exposed to a range of security and privacy issues, including data privacy leakage, adversarial attacks, model theft, data poisoning, and model bias. Addressing these problems requires both technical and ethical efforts to ensure the security, reliability, and fairness of AI systems. This special issue aims to bring together the latest research on security, privacy, and robustness techniques for building privacy-preserving, secure, and trustworthy AI systems.
Potential topics include but are not limited to:
· Attack and defense techniques for AI-enabled systems
· Explainable AI and interpretability
· Intrusion detection and prevention in AI-enabled systems
· Privacy and data protection in AI-enabled systems
· Blockchain technology in AI-enabled systems
· Robustness in AI models
· Automated verification and testing of AI systems
· Fuzz testing technology for AI systems
· Privacy risk assessment technology for AI-enabled systems