Open Access

ARTICLE


A Secure and Effective Energy-Aware Fixed-Point Quantization Scheme for Asynchronous Federated Learning

by Zerui Zhen1, Zihao Wu2, Lei Feng1,*, Wenjing Li1, Feng Qi1, Shixuan Guo1

1 Beijing University of Posts and Telecommunications, Beijing, 100876, China
2 Vanderbilt University, Nashville, TN 37240, USA

* Corresponding Author: Lei Feng. Email: email

Computers, Materials & Continua 2023, 75(2), 2939-2955. https://doi.org/10.32604/cmc.2023.036505

Abstract

Asynchronous federated learning (AsynFL) can effectively mitigate the impact of edge-node heterogeneity on joint training while preserving participants' privacy and data security. However, the frequent exchange of massive data leads to excessive communication overhead between edge and central nodes, regardless of whether the federated learning (FL) algorithm uses synchronous or asynchronous aggregation. There is therefore an urgent need for a method that accounts for device heterogeneity while reducing the energy consumption of edge nodes. This paper proposes a novel Fixed-point Asynchronous Federated Learning (FixedAsynFL) algorithm, which mitigates the resource consumption caused by frequent data communication while alleviating the effect of device heterogeneity. FixedAsynFL uses fixed-point quantization to compress the local and global models in AsynFL. To balance energy consumption against learning accuracy, this paper proposes a quantization scale selection mechanism. It examines the mathematical relationship between the quantization scale and the energy consumption of the computation and communication processes in FixedAsynFL. Taking the upper bound of the quantization noise into account, the quantization scale is optimized by minimizing communication and computation consumption. Experiments are performed on the MNIST dataset with several edge nodes of different computing efficiencies. The results show that FixedAsynFL with 8-bit quantization reduces the communication data size by 81.3% and the computation energy in the training phase by 74.9% without significant loss of accuracy. These results indicate that the proposed FixedAsynFL algorithm can effectively address device heterogeneity and the energy constraints of edge nodes.
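To make the compression step concrete, the sketch below (Python/NumPy, not the authors' code) quantizes a float32 model update to signed fixed-point integers and dequantizes it at the receiver. The per-tensor scale rule, the function names, and the int8 storage are illustrative assumptions; the paper's scheme additionally selects the quantization scale by trading off quantization noise against communication and computation energy.

```python
import numpy as np

def quantize_fixed_point(weights: np.ndarray, bits: int = 8):
    """Map float32 weights to signed fixed-point integers using `bits` bits (bits <= 8 for int8 storage)."""
    qmax = 2 ** (bits - 1) - 1                                   # e.g. 127 for 8-bit quantization
    scale = max(float(np.max(np.abs(weights))) / qmax, 1e-12)    # per-tensor scale (assumption)
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_fixed_point(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor on the receiving side."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(10_000).astype(np.float32)   # stand-in for a local model update
    q, s = quantize_fixed_point(w, bits=8)
    w_hat = dequantize_fixed_point(q, s)
    print("max quantization error:", float(np.max(np.abs(w - w_hat))))
    print("payload reduction vs. float32: %.1f%%" % (100.0 * (1 - q.nbytes / w.nbytes)))
```

Clipping to the symmetric range [-qmax, qmax] keeps the representation sign-symmetric, and for in-range values round-to-nearest bounds the per-element quantization error by scale/2.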

Keywords


Cite This Article

APA Style
Zhen, Z., Wu, Z., Feng, L., Li, W., Qi, F. et al. (2023). A secure and effective energy-aware fixed-point quantization scheme for asynchronous federated learning. Computers, Materials & Continua, 75(2), 2939-2955. https://doi.org/10.32604/cmc.2023.036505
Vancouver Style
Zhen Z, Wu Z, Feng L, Li W, Qi F, Guo S. A secure and effective energy-aware fixed-point quantization scheme for asynchronous federated learning. Comput Mater Contin. 2023;75(2):2939-2955. https://doi.org/10.32604/cmc.2023.036505
IEEE Style
Z. Zhen, Z. Wu, L. Feng, W. Li, F. Qi, and S. Guo, “A Secure and Effective Energy-Aware Fixed-Point Quantization Scheme for Asynchronous Federated Learning,” Comput. Mater. Contin., vol. 75, no. 2, pp. 2939-2955, 2023. https://doi.org/10.32604/cmc.2023.036505



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
