
Open Access Article

SensFL: Privacy-Preserving Vertical Federated Learning with Sensitive Regularization

Chongzhen Zhang1,2,*, Zhichen Liu3, Xiangrui Xu3, Fuqiang Hu3, Jiao Dai3, Baigen Cai1, Wei Wang3
1 School of Automation and Intelligence, Beijing Jiaotong University, Beijing, 100044, China
2 Shuohuang Railway Development Co., Ltd., National Energy Group, Cangzhou, 062350, China
3 Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, School of Computer Science and Technology, Beijing Jiaotong University, Beijing, 100044, China
* Corresponding Author: Chongzhen Zhang. Email: email
(This article belongs to the Special Issue: Information Security and Trust Issues in the Digital World)

Computer Modeling in Engineering & Sciences https://doi.org/10.32604/cmes.2024.055596

Received 02 July 2024; Accepted 10 October 2024; Published online 20 November 2024

Abstract

In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL's susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client's raw data from the embeddings uploaded by that client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method to defend against privacy inference attacks in VFL. Specifically, SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process, effectively limiting the information contained in shared embeddings. By reducing the sensitivity of embeddings to the original data, SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. Experimental results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL's potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
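The core idea described in the abstract can be illustrated with a minimal toy sketch. Note that the paper's actual formulation is not reproduced here; this is an assumed simplification in which the client encoder is linear, z = Wx, so the Jacobian of the embedding with respect to the input is W itself, and embedding sensitivity is penalized via λ·||W||²_F alongside the task loss. All names (`train_client_encoder`, `lam`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_client_encoder(lam, steps=500, lr=0.05):
    """Toy split-learning setup: a client encoder z = W x uploads embeddings
    to a server head y_hat = u . z. The sensitivity of z to the raw input x
    is the Jacobian dz/dx = W, so a SensFL-style penalty lam * ||W||_F^2
    is added to the squared-error task loss. Returns the final ||W||_F."""
    d, k = 8, 4
    X = rng.normal(size=(64, d))          # client's private features
    y = X @ rng.normal(size=d)            # server-side labels (toy regression)
    W = rng.normal(size=(k, d)) * 0.1     # client encoder weights
    u = rng.normal(size=k) * 0.1          # server head weights
    for _ in range(steps):
        Z = X @ W.T                       # embeddings sent to the server
        err = Z @ u - y
        gu = Z.T @ err / len(X)           # gradient of task loss w.r.t. u
        gW = np.outer(u, err @ X / len(X))  # gradient of task loss w.r.t. W
        gW += 2.0 * lam * W               # gradient of the sensitivity penalty
        u -= lr * gu
        W -= lr * gW
    return np.linalg.norm(W)

# The regularized encoder ends up with lower embedding sensitivity ||W||_F,
# which limits how well the server can invert z back to x.
sens_plain = train_client_encoder(lam=0.0)
sens_reg = train_client_encoder(lam=0.1)
print(sens_reg < sens_plain)
```

In this linear toy case the penalty reduces to weight decay on the encoder; the point of the sketch is only that bounding the embedding's input-Jacobian norm trades a small amount of task fit for reduced invertibility of the shared embeddings.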

Keywords

Vertical federated learning; privacy; defenses