FedRN: Exploiting k-Reliable Neighbors Towards Robust Federated Learning

Robustness is becoming another important challenge in federated learning because the data collection process at each client is naturally accompanied by noisy labels. The problem is far more complex and challenging in the federated setting owing to varying levels of data heterogeneity and label noise across clients, which exacerbates the client-to-client performance discrepancy. In this work, we propose a robust federated learning method called FedRN, which exploits k reliable neighbors with high data expertise or similarity. Our method mitigates the gap between low- and high-performance clients by training only on a selected set of clean examples, identified by a collaborative model built from reliability scores over clients. We demonstrate the superiority of FedRN through extensive evaluations on three real-world or synthetic benchmark datasets. Compared with existing robust methods, the results show that FedRN significantly improves test accuracy in the presence of noisy labels.
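The abstract describes two core steps: picking the k most reliable neighbor clients and then keeping only examples a collaborative model deems clean. The following minimal Python sketch illustrates those two steps under stated assumptions; the reliability scores and per-example losses are treated as given inputs, and the small-loss selection rule is a common heuristic for noisy-label filtering, not necessarily the exact criterion used in FedRN.

```python
import numpy as np

def select_reliable_neighbors(reliability, k):
    """Return indices of the k clients with the highest reliability scores.

    `reliability` is a 1-D array of per-client scores. How FedRN derives
    these scores (data expertise/similarity) is not detailed in the
    abstract, so the values are assumed to be precomputed.
    """
    reliability = np.asarray(reliability, dtype=float)
    k = min(k, len(reliability))
    # Sort descending and take the top-k client indices.
    return np.argsort(reliability)[::-1][:k]

def filter_clean_examples(per_example_loss, keep_ratio=0.8):
    """Keep the fraction of examples with the smallest loss under the
    collaborative model -- a standard small-loss heuristic for
    identifying (likely) clean labels."""
    loss = np.asarray(per_example_loss, dtype=float)
    n_keep = max(1, int(keep_ratio * len(loss)))
    # Examples with the lowest loss are treated as clean.
    return np.argsort(loss)[:n_keep]
```

For example, with reliability scores `[0.1, 0.9, 0.5]` and `k=2`, the selected neighbors are clients 1 and 2; each client would then train only on the indices returned by `filter_clean_examples`.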
Publisher
ACM
Issue Date
2022-10-18
Language
English
Citation

31st ACM International Conference on Information and Knowledge Management, CIKM 2022, pp.972 - 981

DOI
10.1145/3511808.3557322
URI
http://hdl.handle.net/10203/302940
Appears in Collection
AI-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
