Submission Deadline: 30 September 2024 (closed)
As artificial intelligence (AI) technology is increasingly applied across social, economic, and everyday domains, researchers have become ever more concerned about its security. Despite its immense potential, AI technology, particularly deep learning, is plagued by problems concerning robustness, model backdoors, fairness, and privacy. Given the high complexity and limited interpretability of neural network models, detecting and defending against these security risks remains a significant challenge. This is particularly critical in safety-related fields such as aerospace, intelligent medicine, and unmanned aerial vehicles, where the credibility, reliability, and interpretability of AI are of utmost importance. Ensuring the safety of AI has therefore become a major trend and research hotspot worldwide.
This special issue aims to bring together the latest research on security, privacy, and robustness techniques for trustworthy AI systems. We also welcome submissions presenting other recent advances that address these issues.
Potential topics include but are not limited to:
Attack and defense techniques for AI systems
Explainable AI and interpretability
Fairness, bias, and discrimination in AI systems
Privacy and data protection in AI systems
Security and privacy in federated learning
Robustness of federated learning models
Automated verification and testing of AI systems
Fuzz testing techniques for AI systems
Privacy risk assessment techniques for AI systems
Applications of AI in software engineering and information security