Open Access
ARTICLE
Privacy-Preserving Large-Scale AI Models for Intelligent Railway Transportation Systems: Hierarchical Poisoning Attacks and Defenses in Federated Learning
1 School of Automation and Intelligence, Beijing Jiaotong University, Beijing, 100044, China
2 Institute of Computing Technologies, China Academy of Railway Sciences Corporation Limited, Beijing, 100081, China
3 School of Computer Science and Technology, Beijing Jiaotong University, Beijing, 100044, China
4 Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, Beijing, 100044, China
5 Institute of Infrastructure Inspection, China Academy of Railway Sciences Corporation Limited, Beijing, 100081, China
6 Zhejiang Key Laboratory of Multi-Dimensional Perception Technology, Application and Cybersecurity, Hangzhou, 310053, China
* Corresponding Author: Yongsheng Zhu. Email:
(This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
Computer Modeling in Engineering & Sciences 2024, 141(2), 1305-1325. https://doi.org/10.32604/cmes.2024.054820
Received 08 June 2024; Accepted 29 July 2024; Issue published 27 September 2024
Abstract
The development of Intelligent Railway Transportation Systems necessitates incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency. Federated learning offers a promising solution by allowing multiple clients to train models collaboratively without sharing private data. However, despite its privacy benefits, federated learning systems are vulnerable to poisoning attacks, where adversaries alter local model parameters on compromised clients and send malicious updates to the server, potentially compromising the global model's accuracy. In this study, we introduce PMM (Perturbation coefficient Multiplied by Maximum value), a new poisoning attack method that perturbs model updates layer by layer, demonstrating the threat that poisoning attacks pose to federated learning. Extensive experiments across three distinct datasets demonstrate PMM's ability to significantly reduce the global model's accuracy. Additionally, we propose an effective defense method, namely CLBL (Cluster Layer By Layer). Experimental results on the same three datasets confirm CLBL's effectiveness.
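As a rough illustration of the two mechanisms named above, the following is a minimal sketch, assuming the attack sets each layer of the malicious update to a perturbation coefficient multiplied by the element-wise maximum of the benign layer updates, and the defense clusters client updates layer by layer (here with two-way k-means) and averages only the larger cluster. The function names pmm_attack and clbl_aggregate, the coefficient value, and the clustering choices are illustrative assumptions, not the paper's exact procedure, which is specified in the method sections.

```python
# Minimal sketch (assumptions): a PMM-style layer-wise perturbation attack and
# a CLBL-style layer-wise clustering aggregation. The exact rules are defined
# in the paper; here the attack returns, per layer, a perturbation coefficient
# times the element-wise maximum of the benign updates, and the defense keeps,
# per layer, only the updates in the larger of two k-means clusters.
import numpy as np
from sklearn.cluster import KMeans


def pmm_attack(benign_updates, coeff=-2.0):
    """Craft one malicious update layer by layer (hypothetical PMM rule).

    benign_updates: list of client updates, each a list of per-layer arrays.
    coeff: perturbation coefficient multiplied by the per-layer maximum.
    """
    n_layers = len(benign_updates[0])
    malicious = []
    for layer in range(n_layers):
        stack = np.stack([u[layer] for u in benign_updates])  # (clients, ...)
        malicious.append(coeff * stack.max(axis=0))           # element-wise max
    return malicious


def clbl_aggregate(updates):
    """Aggregate updates layer by layer, discarding the smaller cluster."""
    n_layers = len(updates[0])
    aggregated = []
    for layer in range(n_layers):
        stack = np.stack([u[layer].ravel() for u in updates])  # (clients, dim)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(stack)
        majority = np.argmax(np.bincount(labels))               # larger cluster
        mean_update = stack[labels == majority].mean(axis=0)
        aggregated.append(mean_update.reshape(updates[0][layer].shape))
    return aggregated


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Eight benign clients, each with a weight matrix and a bias vector.
    benign = [[rng.normal(size=(4, 4)), rng.normal(size=(4,))] for _ in range(8)]
    poisoned = benign + [pmm_attack(benign)] * 2   # two colluding attackers
    global_update = clbl_aggregate(poisoned)
    print([w.shape for w in global_update])
```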
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.