Federated-split learning framework for computationally-constrained system-heterogeneous clients

DC Field | Value | Language
dc.contributor.advisor | 강준혁 | -
dc.contributor.author | Shin, Jiyun | -
dc.contributor.author | 신지윤 | -
dc.date.accessioned | 2024-08-08T19:30:13Z | -
dc.date.available | 2024-08-08T19:30:13Z | -
dc.date.issued | 2024 | -
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1097292&flag=dissertation | en_US
dc.identifier.uri | http://hdl.handle.net/10203/321774 | -
dc.description | Master's thesis - KAIST : School of Electrical Engineering, 2024.2, [iii, 28 p.] | -
dc.description.abstract | Federated learning is a machine learning technique that overcomes the privacy and limited-bandwidth issues of centralized learning, in which local data must be sent to a central server for training. In real-world deployments, however, federated learning must cope with heterogeneous clients whose computing power varies widely. To accommodate such clients, either multiple global models must be maintained or the global model must be shrunk to fit the least capable client, both of which degrade overall performance. In particular, clients with limited capabilities find it difficult to train large, computationally intensive machine learning models. To address these challenges, we propose a novel federated learning framework that tackles system heterogeneity. To enable training of large models despite limited client computing power, the proposed framework partitions the large model into a client-side model and a server-side model, with the partitioning point chosen flexibly to match each client's capability. This approach allows the server's computational power to be used alongside the client's to train the larger model, thereby improving model performance. By assigning different partitioning points to different clients, it also lets every client participate in training and reduces unnecessary use of server resources. Experiments show that the proposed algorithm effectively utilizes the server's computing power and outperforms the baseline algorithms. (A minimal illustrative sketch of the flexible partitioning idea appears after this metadata record.) | -
dc.language | eng | -
dc.publisher | 한국과학기술원 | -
dc.subject | 연합 학습; 분할 학습; 모델-이질적인 | -
dc.subject | Federated learning; Split learning; Model-heterogeneous | -
dc.title | Federated-split learning framework for computationally-constrained system-heterogeneous clients | -
dc.title.alternative | 제한된 계산 능력을 가지는 시스템 이질적인 기기 환경을 위한 연합-분할학습 연구 | -
dc.type | Thesis (Master) | -
dc.identifier.CNRN | 325007 | -
dc.description.department | 한국과학기술원 : 전기및전자공학부 (KAIST : School of Electrical Engineering) | -
dc.contributor.alternativeauthor | Kang, Joonhyuk | -
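
The abstract above describes splitting a large model into a client-side part and a server-side part at a per-client partitioning point, so that weaker clients offload more layers to the server. The sketch below illustrates that split-training idea in PyTorch under simple assumptions; the network, the cut points, and all names (make_model, split_at, train_step) are hypothetical stand-ins and not the thesis's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    # A small sequential network standing in for the "large" global model.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 512), nn.ReLU(),
        nn.Linear(512, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

def split_at(model, cut):
    # Client keeps the first `cut` layers, the server keeps the rest.
    # A weaker client uses a smaller `cut`, shifting more work to the server.
    layers = list(model.children())
    return nn.Sequential(*layers[:cut]), nn.Sequential(*layers[cut:])

def train_step(client_model, server_model, x, y, client_opt, server_opt):
    client_opt.zero_grad()
    server_opt.zero_grad()

    # 1) Client-side forward pass up to the cut layer ("smashed" activations).
    smashed = client_model(x)

    # 2) Detaching mimics sending the activations over the network;
    #    the server finishes the forward pass and computes the loss.
    server_in = smashed.detach().requires_grad_(True)
    loss = F.cross_entropy(server_model(server_in), y)

    # 3) Server backpropagates and "returns" the activation gradient,
    #    from which the client completes its own backward pass.
    loss.backward()
    smashed.backward(server_in.grad)

    client_opt.step()
    server_opt.step()
    return loss.item()

if __name__ == "__main__":
    # Two heterogeneous clients share one architecture but use different cuts.
    weak_client, server_side_weak = split_at(make_model(), cut=3)      # small client part
    strong_client, server_side_strong = split_at(make_model(), cut=7)  # larger client part

    x = torch.randn(8, 1, 28, 28)            # dummy MNIST-shaped batch
    y = torch.randint(0, 10, (8,))
    opt_c = torch.optim.SGD(weak_client.parameters(), lr=0.01)
    opt_s = torch.optim.SGD(server_side_weak.parameters(), lr=0.01)
    print("loss:", train_step(weak_client, server_side_weak, x, y, opt_c, opt_s))
    # The strong client would run the same loop with its own, larger, client-side split.
```

In this pattern the client only computes and stores the layers before the cut while the server trains the remainder, so the cut index is the single knob that matches each client's compute budget, in the spirit of the flexible partitioning points described in the abstract.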
Appears in Collection
EE-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
