Open Access
ARTICLE
SensFL: Privacy-Preserving Vertical Federated Learning with Sensitive Regularization
1 School of Automation and Intelligence, Beijing Jiaotong University, Beijing, 100044, China
2 Shuohuang Railway Development Co., Ltd., National Energy Group, Cangzhou, 062350, China
3 Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, School of Computer Science and Technology, Beijing Jiaotong University, Beijing, 100044, China
* Corresponding Author: Chongzhen Zhang. Email:
(This article belongs to the Special Issue: Information Security and Trust Issues in the Digital World)
Computer Modeling in Engineering & Sciences 2025, 142(1), 385-404. https://doi.org/10.32604/cmes.2024.055596
Received 02 July 2024; Accepted 10 October 2024; Issue published 17 December 2024
Abstract
In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL’s susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client’s raw data from embeddings uploaded by the client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method to defend against privacy inference attacks in VFL. Specifically, SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process, effectively limiting the information contained in shared embeddings. By reducing the sensitivity of embeddings to the original data, SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. Experimental results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL’s potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
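The core idea described above — penalizing the sensitivity of client embeddings to the raw inputs during training — can be sketched as a gradient-norm regularizer added to the task loss. The snippet below is a minimal illustration in PyTorch, not the paper's actual implementation: the encoder architecture, the task loss, and the weight `lam` are all assumptions, and the penalty shown is a cheap row-sum surrogate for the full input-output Jacobian norm.

```python
import torch
import torch.nn as nn

# Hypothetical client-side encoder; the paper's real architectures differ.
encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

def sensitivity_penalty(x: torch.Tensor) -> torch.Tensor:
    """Penalize d(embedding)/d(input): one possible reading of
    'sensitivity of embeddings to the original data'."""
    x = x.clone().requires_grad_(True)
    h = encoder(x)
    # Summing over embedding dims gives row-sums of the Jacobian,
    # a cheap surrogate for its full Frobenius norm.
    grad, = torch.autograd.grad(h.sum(), x, create_graph=True)
    return grad.pow(2).sum(dim=1).mean()

x = torch.randn(32, 8)
task_loss = encoder(x).pow(2).mean()   # stand-in for the real task loss
lam = 0.1                              # regularization weight (assumed)
total_loss = task_loss + lam * sensitivity_penalty(x)
total_loss.backward()                  # updates would follow as usual
```

Because the penalty is differentiable (`create_graph=True`), it is minimized jointly with the primary objective, so the client learns embeddings that remain useful for the server's task while carrying less gradient information an attacker could invert.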
This work is licensed under a Creative Commons Attribution 4.0 International License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.