
Open Access

ARTICLE

Privacy-Preserving Large-Scale AI Models for Intelligent Railway Transportation Systems: Hierarchical Poisoning Attacks and Defenses in Federated Learning

Yongsheng Zhu1,2,*, Chong Liu3,4, Chunlei Chen5, Xiaoting Lyu3,4, Zheng Chen3,4, Bin Wang6, Fuqiang Hu3,4, Hanxi Li3,4, Jiao Dai3,4, Baigen Cai1, Wei Wang3,4
1 School of Automation and Intelligence, Beijing Jiaotong University, Beijing, 100044, China
2 Institute of Computing Technologies, China Academy of Railway Sciences Corporation Limited, Beijing, 100081, China
3 School of Computer Science and Technology, Beijing Jiaotong University, Beijing, 100044, China
4 Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, Beijing, 100044, China
5 Institute of Infrastructure Inspection, China Academy of Railway Sciences Corporation Limited, Beijing, 100081, China
6 Zhejiang Key Laboratory of Multi-Dimensional Perception Technology, Application and Cybersecurity, Hangzhou, 310053, China
* Corresponding Author: Yongsheng Zhu
(This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)

Computer Modeling in Engineering & Sciences https://doi.org/10.32604/cmes.2024.054820

Received 08 June 2024; Accepted 29 July 2024; Published online 19 September 2024

Abstract

The development of Intelligent Railway Transportation Systems necessitates incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency. Federated learning offers a promising solution by allowing multiple clients to train models collaboratively without sharing private data. Despite these privacy benefits, however, federated learning systems remain vulnerable to poisoning attacks, in which adversaries alter local model parameters on compromised clients and send malicious updates to the server, degrading the global model's accuracy. In this study, we introduce PMM (Perturbation coefficient Multiplied by Maximum value), a new poisoning attack that perturbs model updates layer by layer, demonstrating the severity of the poisoning threat that federated learning faces. Extensive experiments on three distinct datasets show that PMM significantly reduces the global model's accuracy. We also propose an effective defense method, CLBL (Cluster Layer By Layer); experimental results on the same three datasets confirm its effectiveness.
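The abstract specifies only that PMM perturbs model updates layer by layer via a perturbation coefficient multiplied by a maximum value; the exact formulation appears in the full paper. The following is a minimal Python sketch under that reading, assuming the attacker can observe or estimate benign client updates; the function name `pmm_attack`, the coefficient `gamma`, and the element-wise maximum are illustrative assumptions, not the authors' definitive method.

```python
import numpy as np

def pmm_attack(benign_updates, gamma=-1.0):
    """Hypothetical sketch of a PMM-style layer-wise poisoning attack.

    benign_updates: list of per-client updates, each a list of per-layer
    numpy arrays. gamma is an assumed perturbation coefficient.
    Returns one malicious update with the same per-layer shapes.
    """
    num_layers = len(benign_updates[0])
    malicious = []
    for layer_idx in range(num_layers):
        # Stack this layer's update from every benign client: (n_clients, ...).
        stacked = np.stack([u[layer_idx] for u in benign_updates], axis=0)
        # Element-wise maximum across clients for this layer.
        layer_max = stacked.max(axis=0)
        # Malicious layer = perturbation coefficient * maximum value.
        malicious.append(gamma * layer_max)
    return malicious
```

A negative `gamma` pushes the aggregate away from the benign update direction; the coefficient the paper actually uses, and how it is tuned, may differ.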
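Likewise, CLBL's name suggests clustering client updates layer by layer and aggregating only the dominant cluster. The sketch below assumes k-means with two clusters and a majority rule; these details are hypothetical and not confirmed by the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def clbl_aggregate(updates, n_clusters=2):
    """Hypothetical sketch of a CLBL-style layer-wise clustering defense.

    updates: list of per-client updates (lists of per-layer numpy arrays).
    For each layer, cluster the clients' flattened layer updates, keep the
    largest (presumed benign) cluster, and average it.
    """
    num_layers = len(updates[0])
    aggregated = []
    for layer_idx in range(num_layers):
        # Flatten this layer across clients: shape (n_clients, layer_size).
        layer = np.stack([u[layer_idx].ravel() for u in updates], axis=0)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(layer)
        # Assume the majority cluster contains the benign clients.
        majority = np.bincount(labels).argmax()
        kept = layer[labels == majority]
        aggregated.append(
            kept.mean(axis=0).reshape(updates[0][layer_idx].shape))
    return aggregated
```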

Keywords

Privacy-preserving; intelligent railway transportation system; federated learning; poisoning attacks; defenses