Open Access

ARTICLE

SensFL: Privacy-Preserving Vertical Federated Learning with Sensitive Regularization

by Chongzhen Zhang1,2,*, Zhichen Liu3, Xiangrui Xu3, Fuqiang Hu3, Jiao Dai3, Baigen Cai1, Wei Wang3

1 School of Automation and Intelligence, Beijing Jiaotong University, Beijing, 100044, China
2 Shuohuang Railway Development Co., Ltd., National Energy Group, Cangzhou, 062350, China
3 Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, School of Computer Science and Technology, Beijing Jiaotong University, Beijing, 100044, China

* Corresponding Author: Chongzhen Zhang. Email: email

(This article belongs to the Special Issue: Information Security and Trust Issues in the Digital World)

Computer Modeling in Engineering & Sciences 2025, 142(1), 385-404. https://doi.org/10.32604/cmes.2024.055596

Abstract

In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL's susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client's raw data from embeddings uploaded by the client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method to defend against privacy inference attacks in VFL. Specifically, SensFL integrates a regularization term on the sensitivity of embeddings to the original data into the model training process, effectively limiting the information contained in shared embeddings. By reducing this sensitivity, SensFL can resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. Experimental results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL's potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
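The core idea described above, penalizing how sensitive the shared embeddings are to the raw inputs, can be illustrated with a minimal, hypothetical sketch (this is not the paper's code). For a linear client model e = xW, the Jacobian of the embedding with respect to the input is simply W, so a sensitivity penalty reduces to a Frobenius-norm term on W added to the task loss; all variable names and the toy regression task below are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch of sensitivity-regularized VFL training (not the authors' code).
# Client embedding model: e = x @ W. For this linear map, de/dx = W, so the
# sensitivity penalty lam * ||de/dx||_F^2 is just lam * ||W||_F^2.

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))           # client's private features
w_true = rng.normal(size=(8, 1))
y = X @ w_true                         # server-side labels (toy regression task)

W = np.zeros((8, 1))                   # embedding weights (embedding dim = 1)
lam, lr = 0.1, 0.05                    # sensitivity weight, learning rate

for _ in range(500):
    err = X @ W - y                    # task residual
    grad_task = X.T @ err / len(X)     # gradient of the MSE task loss
    grad_sens = 2 * lam * W            # gradient of lam * ||W||_F^2
    W -= lr * (grad_task + grad_sens)  # joint update: utility + privacy terms

sensitivity = np.linalg.norm(W)        # smaller norm => embeddings reveal less about x
```

Compared with unregularized training, the learned map has a smaller Jacobian norm, which is the mechanism SensFL exploits to limit what a curious server can invert from the embeddings, at a controlled cost to task accuracy.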

Keywords


Cite This Article

APA Style
Zhang, C., Liu, Z., Xu, X., Hu, F., Dai, J. et al. (2025). Sensfl: privacy-preserving vertical federated learning with sensitive regularization. Computer Modeling in Engineering & Sciences, 142(1), 385-404. https://doi.org/10.32604/cmes.2024.055596
Vancouver Style
Zhang C, Liu Z, Xu X, Hu F, Dai J, Cai B, et al. Sensfl: privacy-preserving vertical federated learning with sensitive regularization. Comput Model Eng Sci. 2025;142(1):385-404. https://doi.org/10.32604/cmes.2024.055596
IEEE Style
C. Zhang et al., “SensFL: Privacy-Preserving Vertical Federated Learning with Sensitive Regularization,” Comput. Model. Eng. Sci., vol. 142, no. 1, pp. 385-404, 2025. https://doi.org/10.32604/cmes.2024.055596



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.