FEWER: Federated Weight Recovery

In federated learning, local devices train the model on their local data independently, and the server gathers the locally trained models and aggregates them into a shared global model. Federated learning is therefore an approach that decouples model training from direct access to the local data. However, the periodic communication of model parameters is a primary bottleneck for the efficiency of federated learning. This work proposes a novel federated learning algorithm, Federated Weight Recovery (FEWER), which trains a sparsely pruned model during the training phase. FEWER starts training from an extremely sparse model and gradually grows the model capacity until the model becomes dense at the end of training. The level of sparsity becomes a lever for either increasing accuracy or decreasing communication cost, and this sparsification can be beneficial to practitioners. Our experimental results show that FEWER achieves higher test accuracies with lower communication costs in most of the test cases. © 2020 ACM.
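As a rough illustration of the grow-to-dense idea described in the abstract, the Python sketch below simulates communication rounds in which clients prune their locally trained weights to a sparsity level that decays from extremely sparse to dense over training. This is a minimal sketch under stated assumptions: the linear sparsity schedule, magnitude-based masking, the FedAvg-style mean aggregation, and all function names (sparsity_at_round, magnitude_mask, federated_round) are illustrative choices, not the paper's exact method.

import numpy as np

def sparsity_at_round(t, total_rounds, initial_sparsity=0.99):
    """Assumed linear schedule: extremely sparse at round 0, dense at the last round."""
    return initial_sparsity * max(0.0, 1.0 - t / (total_rounds - 1))

def magnitude_mask(w, sparsity):
    """Boolean mask keeping the largest-magnitude (1 - sparsity) fraction of weights."""
    k = int(w.size * sparsity)  # number of weights to prune
    if k <= 0:
        return np.ones_like(w, dtype=bool)
    kth_smallest = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.abs(w) > kth_smallest

def federated_round(global_w, client_grads, t, total_rounds, lr=0.1):
    """One communication round: each client updates locally, prunes its weights
    to the current sparsity level, and the server averages the sparse updates."""
    s = sparsity_at_round(t, total_rounds)
    updates = []
    for g in client_grads:
        local_w = global_w - lr * g            # stand-in for on-device local training
        local_w *= magnitude_mask(local_w, s)  # only ~(1 - s) of entries are non-zero
        updates.append(local_w)
    return np.mean(updates, axis=0)            # FedAvg-style server aggregation

# Toy run: 5 clients, 50 rounds; sparsity decays from 99% toward 0%.
rng = np.random.default_rng(0)
w = rng.normal(size=1_000)
grads = [rng.normal(size=1_000) for _ in range(5)]
for t in range(50):
    w = federated_round(w, grads, t, total_rounds=50)

In this simulation, early rounds communicate only about 1% of the weight entries, and the per-round communication cost grows as the schedule approaches a dense model, mirroring the sparsity/communication trade-off the abstract describes.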
Publisher
ACM
Issue Date
2020-12-01
Language
English
Citation
1st Workshop on Distributed Machine Learning, DistributedML 2020, co-located with the 16th International Conference on emerging Networking EXperiments and Technologies, CoNEXT 2020

DOI
10.1145/3426745.3431335
URI
http://hdl.handle.net/10203/280114
Appears in Collection
RIMS Conference Papers; IE-Conference Papers (Conference Papers)