Open Access

ARTICLE

A Federated Learning Incentive Mechanism for Dynamic Client Participation: Unbiased Deep Learning Models

Jianfeng Lu1, Tao Huang1, Yuanai Xie2,*, Shuqin Cao1, Bing Li3

1 School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, 430065, China
2 College of Computer Science, South-Central Minzu University, Wuhan, 430074, China
3 School of Computer Science and Technology, Zhejiang Normal University, Jinhua, 321004, China

* Corresponding Author: Yuanai Xie.

(This article belongs to the Special Issue: The Next-generation Deep Learning Approaches to Emerging Real-world Applications)

Computers, Materials & Continua 2025, 83(1), 619-634. https://doi.org/10.32604/cmc.2025.060094

Abstract

The proliferation of deep learning (DL) has amplified the demand for processing large and complex datasets for tasks such as modeling, classification, and identification. However, traditional DL methods compromise client privacy by collecting sensitive data, underscoring the need for privacy-preserving solutions like Federated Learning (FL). FL addresses escalating privacy concerns by enabling collaborative model training without sharing raw data. Because FL clients autonomously manage their training data, encouraging client engagement is pivotal for successful model training. To overcome challenges such as unreliable communication and budget constraints, we present ENTIRE, a contract-based dynamic participation incentive mechanism for FL. ENTIRE ensures unbiased model training by tailoring participation levels and payments to diverse client preferences. Our approach involves several key steps. First, we analyze how random client participation affects FL convergence in non-convex settings, establishing the relationship between client participation levels and model performance. We then reframe model performance optimization as an optimal contract design problem to guide the distribution of rewards among clients with varying participation costs. By balancing budget considerations against model effectiveness, we derive optimal contracts under different budget constraints, prompting clients to truthfully reveal their participation preferences and select suitable contracts for contributing to model training. Finally, we conduct a comprehensive experimental evaluation of ENTIRE on three real-world datasets. The results demonstrate a significant 12.9% improvement in model performance and confirm that ENTIRE satisfies the anticipated economic properties.
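To illustrate the contract-selection step described above, the toy sketch below (not the paper's actual formulation; the menu values and the linear cost model are illustrative assumptions) shows a client choosing from a contract menu of (participation level, payment) pairs. A client with per-round cost `c` picks the contract maximizing its utility `payment - c * level`, and opts out if no contract yields non-negative utility:

```python
def choose_contract(menu, cost):
    """Return the (level, payment) pair maximizing the client's utility
    payment - cost * level, or None if every contract yields negative
    utility (the client opts out of training)."""
    best, best_utility = None, 0.0
    for level, payment in menu:
        utility = payment - cost * level
        if utility >= best_utility:
            best, best_utility = (level, payment), utility
    return best

# Hypothetical menu, ordered by participation level; higher participation
# is compensated with a higher payment.
menu = [(0.2, 1.0), (0.5, 2.0), (0.8, 3.0)]

print(choose_contract(menu, cost=3.0))  # low-cost client -> (0.8, 3.0)
print(choose_contract(menu, cost=5.0))  # high-cost client -> (0.2, 1.0)
print(choose_contract(menu, cost=6.0))  # too costly -> None (opts out)
```

Under this self-selection pattern, low-cost clients gravitate to high-participation contracts while high-cost clients take lighter ones, which is the intuition behind the incentive-compatible menu the paper designs.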

Keywords

Federated learning; deep learning; non-IID data; dynamic client participation; non-convex optimization; contract

Cite This Article

APA Style
Lu, J., Huang, T., Xie, Y., Cao, S., Li, B. (2025). A federated learning incentive mechanism for dynamic client participation: unbiased deep learning models. Computers, Materials & Continua, 83(1), 619–634. https://doi.org/10.32604/cmc.2025.060094
Vancouver Style
Lu J, Huang T, Xie Y, Cao S, Li B. A federated learning incentive mechanism for dynamic client participation: unbiased deep learning models. Comput Mater Contin. 2025;83(1):619–634. https://doi.org/10.32604/cmc.2025.060094
IEEE Style
J. Lu, T. Huang, Y. Xie, S. Cao, and B. Li, “A Federated Learning Incentive Mechanism for Dynamic Client Participation: Unbiased Deep Learning Models,” Comput. Mater. Contin., vol. 83, no. 1, pp. 619–634, 2025. https://doi.org/10.32604/cmc.2025.060094



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.