Search Results (2)
  • Open Access

    ARTICLE

    CASBA: Capability-Adaptive Shadow Backdoor Attack against Federated Learning

    Hongwei Wu*, Guojian Li, Hanyun Zhang, Zi Ye, Chao Ma

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.071008 - 12 January 2026

Abstract Federated Learning (FL) protects data privacy through a distributed training mechanism, yet its decentralized nature also introduces new security vulnerabilities. Backdoor attacks inject malicious triggers into the global model through compromised updates, posing significant threats to model integrity and becoming a key focus in FL security. Existing backdoor attack methods typically embed triggers directly into original images and consider only data heterogeneity, resulting in limited stealth and adaptability. To address the heterogeneity of malicious client devices, this paper proposes a novel backdoor attack method named Capability-Adaptive Shadow Backdoor Attack (CASBA). By incorporating measurements of clients’…
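The "direct embedding" baseline the abstract contrasts CASBA with can be illustrated with a minimal sketch: a fixed pixel patch is stamped into a fraction of a client's training images and their labels are flipped to an attacker-chosen target class. This is a generic patch-trigger poisoning sketch for illustration only, not CASBA's method; the function names and the patch shape are assumptions.

```python
import numpy as np

def embed_patch_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner of an image.

    Illustrates the classic direct-embedding trigger the abstract refers to
    (hypothetical helper, not part of CASBA). `image` is an (H, W) or
    (H, W, C) float array with values in [0, 1].
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, ...] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Poison a fraction `rate` of samples: add the trigger, flip the label."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = embed_patch_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx
```

A malicious FL client would train its local update on the poisoned set, so the aggregated global model learns to map the trigger pattern to the target label while behaving normally on clean inputs.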

  • Open Access

    ARTICLE

    FedTC: A Personalized Federated Learning Method with Two Classifiers

Yang Liu, Jiabo Wang*, Qinbo Liu, Mehdi Gheisari, Wanyin Xu, Zoe L. Jiang, Jiajia Zhang*

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3013-3027, 2023, DOI:10.32604/cmc.2023.039452 - 08 October 2023

Abstract Centralized training of deep learning models poses privacy risks that hinder their deployment. Federated learning (FL) has emerged as a solution to address these risks, allowing multiple clients to train deep learning models collaboratively without sharing raw data. However, FL is vulnerable to heterogeneous distributed data, which weakens convergence stability and leads to suboptimal performance of the trained model on local data. This is because the old local model is discarded at each training round, losing personalized information critical for maintaining model accuracy…
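The personalization idea the abstract describes, retaining locally learned parameters instead of discarding them each round, can be sketched as a federated round in which each client keeps its own classifier head while only the shared parameters are averaged. This is a generic two-classifier-style sketch under assumed parameter names (`local_head` etc.), not FedTC's actual implementation.

```python
import numpy as np

def fedavg(client_params, weights):
    """Weighted average of parameter dicts (standard FedAvg aggregation)."""
    total = sum(weights)
    keys = client_params[0].keys()
    return {k: sum(w * p[k] for p, w in zip(client_params, weights)) / total
            for k in keys}

def personalized_round(clients, weights, personal_key="local_head"):
    """One aggregation round where each client retains its own local
    classifier head; only the remaining (shared) parameters are averaged.

    Illustrates the general two-classifier idea from the abstract; the
    `personal_key` parameter name is hypothetical.
    """
    shared = [{k: v for k, v in c.items() if k != personal_key}
              for c in clients]
    avg = fedavg(shared, weights)
    # Each client's new model: averaged shared params + its retained head,
    # so per-client personalized information survives the round.
    return [dict(avg, **{personal_key: c[personal_key]}) for c in clients]
```

Because the personal head never passes through aggregation, the information it encodes about the client's local data distribution is preserved across rounds, which is the loss the abstract attributes to discarding the old local model.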
