Open Access
ARTICLE
Byzantine Robust Federated Learning Scheme Based on Backdoor Triggers
School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
* Corresponding Author: Ke Gu.
Computers, Materials & Continua 2024, 79(2), 2813-2831. https://doi.org/10.32604/cmc.2024.050025
Received 25 January 2024; Accepted 10 April 2024; Issue published 15 May 2024
Abstract
Federated learning is widely used to address data decentralization and can provide privacy protection for data owners. However, because federated learning requires multiple participants, it opens the door to compromise by attackers. Byzantine attacks pose a serious threat to federated learning: Byzantine attackers upload maliciously crafted local models to the server to degrade the prediction performance and training speed of the global model. To defend against Byzantine attacks, we propose a Byzantine-robust federated learning scheme based on backdoor triggers. In our scheme, backdoor triggers are embedded into benign data samples, and the server then identifies malicious local models using its validation dataset. Furthermore, we calculate an adjustment factor for each local model from the parameters of its final layer, which is used to defend against data poisoning-based Byzantine attacks. To further enhance the robustness of our scheme, each local model is weighted during aggregation according to the number of times it has been identified as malicious. Experimental results show that our scheme is effective against Byzantine attacks in both independent and identically distributed (IID) and non-independent and identically distributed (non-IID) scenarios.
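The defense summarized above can be read as a three-step pipeline: embed triggers into validation samples, flag local models that fail the trigger check, and down-weight repeatedly flagged models during aggregation. Below is a minimal Python/NumPy sketch of that pipeline; all function names, the trigger pattern, the detection threshold, and the weighting formula are illustrative assumptions of ours, not details taken from the paper.

```python
import numpy as np

def embed_trigger(sample, trigger_value=1.0, size=3):
    """Stamp a small square trigger into the top-left corner of an image sample
    (a hypothetical trigger pattern; the paper's actual trigger may differ)."""
    stamped = sample.copy()
    stamped[:size, :size] = trigger_value
    return stamped

def flag_malicious(trigger_accuracies, threshold=0.5):
    """Flag a model as suspicious if its accuracy on the trigger-embedded
    validation set falls below an assumed threshold."""
    return [acc < threshold for acc in trigger_accuracies]

def aggregate(local_models, malicious_counts):
    """Aggregate local models (lists of weight arrays), down-weighting those
    that have been flagged as malicious more often."""
    weights = 1.0 / (1.0 + np.asarray(malicious_counts, dtype=float))
    weights /= weights.sum()
    # Combine layer by layer: zip groups the i-th layer of every model together.
    return [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*local_models)
    ]

# Toy usage: three "models", each a list of two weight arrays.
rng = np.random.default_rng(0)
models = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
counts = [0, 0, 5]  # the third model has been flagged five times
global_model = aggregate(models, counts)
print([layer.shape for layer in global_model])  # [(4, 4), (4,)]
```

The inverse-count weighting used here is one simple way to let a model's history of detections reduce its influence; the paper's adjustment factor additionally draws on the final-layer parameters, which this sketch does not model.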
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.