Towards Attack-tolerant Federated Learning via Critical Parameter Analysis

Federated learning trains a shared model in a decentralized way without clients sharing private data with each other. Such systems are susceptible to poisoning attacks, in which malicious clients send false updates to the central server, and existing defense strategies are ineffective under non-IID data settings. This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis). Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not. Experiments with different attack scenarios on multiple datasets demonstrate that our model outperforms existing defense strategies in defending against poisoning attacks.
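The core observation can be illustrated with a small sketch: extract each client's top-k and bottom-k parameter index sets, then score each client by the average Jaccard overlap of those sets with every other client's. This is a simplified stand-in, not the paper's exact algorithm; the criterion FedCPA uses to rank parameters and the way scores feed into aggregation may differ.

```python
import numpy as np

def critical_indices(params, k):
    """Index sets of the k largest and k smallest parameter values.
    A simple proxy for 'critical parameters'; the paper's exact
    ranking criterion may differ."""
    order = np.argsort(params)
    return set(order[-k:]), set(order[:k])

def jaccard(a, b):
    return len(a & b) / len(a | b)

def similarity_scores(client_params, k):
    """Average overlap of each client's critical-parameter sets with
    all other clients'. Poisoned updates tend to score low, so the
    server can down-weight or drop them during aggregation."""
    crit = [critical_indices(p, k) for p in client_params]
    n = len(client_params)
    scores = []
    for i in range(n):
        total = 0.0
        for j in range(n):
            if i == j:
                continue
            top_sim = jaccard(crit[i][0], crit[j][0])
            bot_sim = jaccard(crit[i][1], crit[j][1])
            total += 0.5 * (top_sim + bot_sim)
        scores.append(total / (n - 1))
    return scores

# Toy example: three similar benign updates and one adversarial update
# whose sign flip swaps the top-k and bottom-k parameter sets.
rng = np.random.default_rng(0)
base = rng.normal(size=100)
benign = [base + 0.05 * rng.normal(size=100) for _ in range(3)]
poisoned = -base
scores = similarity_scores(benign + [poisoned], k=10)
```

In this toy setup the poisoned client receives a markedly lower score than the benign ones, which is the signal an attack-tolerant aggregator can exploit.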
Publisher
IEEE/Computer Vision Foundation
Issue Date
2023-10-04
Language
English
Citation
International Conference on Computer Vision, ICCV 2023
URI
http://hdl.handle.net/10203/314907
Appears in Collection
CS-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
