Open Access

ARTICLE


FedAdaSS: Federated Learning with Adaptive Parameter Server Selection Based on Elastic Cloud Resources

Yuwei Xu, Baokang Zhao*, Huan Zhou, Jinshu Su

School of Computer, National University of Defense Technology, Changsha, 410000, China

* Corresponding Author: Baokang Zhao.

(This article belongs to the Special Issue: Advanced Security for Future Mobile Internet: A Key Challenge for the Digital Transformation)

Computer Modeling in Engineering & Sciences 2024, 141(1), 609-629. https://doi.org/10.32604/cmes.2024.053462

Abstract

The rapid expansion of artificial intelligence (AI) applications has raised significant concerns about user privacy, prompting the development of privacy-preserving machine learning (ML) paradigms such as federated learning (FL). FL enables the distributed training of ML models, keeping data on local devices and thus addressing the privacy concerns of users. However, challenges arise from the heterogeneous nature of mobile client devices, partial engagement in training, and non-independent identically distributed (non-IID) data, leading to performance degradation and optimization objective bias in FL training. With the development of 5G/6G networks and the integration of cloud and edge computing resources, globally distributed cloud computing resources can be effectively utilized to optimize the FL process. By choosing the parameter server for each round through a dedicated selection mechanism, the framework reduces network latency overhead without increasing monetary cost, and balances the objectives of communication optimization and low-engagement mitigation, which cannot be achieved simultaneously in the single-server frameworks of existing works. In this paper, we propose the FedAdaSS algorithm, an adaptive parameter server selection mechanism designed to optimize the training efficiency of each round of FL training by selecting the most appropriate server as the parameter server. Our approach leverages the flexibility of elastic cloud computing resources and allows organizers to strategically select servers for data broadcasting and aggregation, thus improving training performance while maintaining cost efficiency. The FedAdaSS algorithm estimates the utility of client systems and servers and incorporates an adaptive random reshuffling strategy that selects the optimal server in each round of the training process.
Theoretical analysis confirms the convergence of FedAdaSS under strong convexity and L-smoothness assumptions, and comparative experiments within the FLSim framework demonstrate a reduction in rounds-to-accuracy of 12%–20% compared to Federated Averaging (FedAvg) with random reshuffling under a single server. Furthermore, FedAdaSS effectively mitigates the performance loss caused by low client engagement, reducing the loss indicator by 50%.
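The per-round mechanism described in the abstract can be sketched in Python. The utility function below (inverse mean client-to-server latency) and the function names are illustrative assumptions, not the paper's actual estimator, which combines client-system and server utilities; the random reshuffling part follows the standard definition of drawing a fresh client permutation each epoch.

```python
import random

def select_server(servers, client_latency):
    """Pick the candidate server with the highest utility for this round.

    The utility here is a hypothetical inverse-mean-latency score; the
    paper's estimator also accounts for client-system characteristics.
    """
    def utility(server):
        # Lower average latency from clients to this server => higher utility.
        latencies = [client_latency[c][server] for c in client_latency]
        return 1.0 / (sum(latencies) / len(latencies))
    return max(servers, key=utility)

def reshuffled_rounds(clients, num_rounds, seed=0):
    """Yield a per-round client ordering via random reshuffling:
    a fresh permutation each round, so no client is starved."""
    rng = random.Random(seed)
    order = list(clients)
    for _ in range(num_rounds):
        rng.shuffle(order)
        yield list(order)

# Toy setup: two candidate parameter servers, three clients.
servers = ["s1", "s2"]
client_latency = {
    "c1": {"s1": 30.0, "s2": 80.0},
    "c2": {"s1": 40.0, "s2": 60.0},
    "c3": {"s1": 35.0, "s2": 90.0},
}
best = select_server(servers, client_latency)  # "s1" has the lowest mean latency
```

In a full FL loop, `select_server` would run once per round before broadcasting the global model, so the organizer can shift aggregation to whichever elastic cloud server currently offers the best utility.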

Keywords


Cite This Article

APA Style
Xu, Y., Zhao, B., Zhou, H., Su, J. (2024). FedAdaSS: federated learning with adaptive parameter server selection based on elastic cloud resources. Computer Modeling in Engineering & Sciences, 141(1), 609-629. https://doi.org/10.32604/cmes.2024.053462
Vancouver Style
Xu Y, Zhao B, Zhou H, Su J. FedAdaSS: federated learning with adaptive parameter server selection based on elastic cloud resources. Comput Model Eng Sci. 2024;141(1):609-629. https://doi.org/10.32604/cmes.2024.053462
IEEE Style
Y. Xu, B. Zhao, H. Zhou, and J. Su, “FedAdaSS: Federated Learning with Adaptive Parameter Server Selection Based on Elastic Cloud Resources,” Comput. Model. Eng. Sci., vol. 141, no. 1, pp. 609-629, 2024. https://doi.org/10.32604/cmes.2024.053462



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.