Fast and robust distributed machine learning

DC Fields
dc.contributor.advisor: Moon, Jaekyun
dc.contributor.advisor: 문재균
dc.contributor.author: Han, Dong-Jun
dc.date.accessioned: 2023-06-23T19:33:46Z
dc.date.available: 2023-06-23T19:33:46Z
dc.date.issued: 2022
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=996268&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/309110
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering, 2022.2, [vii, 97 p.]
dc.description.abstract: With the increasing number of edge nodes such as mobile phones, Internet of Things (IoT) devices, and smart vehicles/drones, large amounts of valuable computing resources are now located at the edge, and a large portion of the data generated today is collected at these distributed nodes. Distributed machine learning (ML) is therefore becoming increasingly important and is receiving considerable attention. This thesis proposes fast, robust, and communication/computation-efficient distributed ML solutions for edge networks with many distributed nodes. The first part of the thesis (Chapters 2 and 3) focuses on distributed learning, which aims to speed up training via data parallelization; using tools from information theory and coding theory, we develop a fast and robust distributed learning solution tailored to practical communication networks. The second part (Chapter 4) focuses on federated learning, where the goal is to collaboratively train a model without directly uploading each node's local/private data to the server; we propose an algorithm that speeds up federated learning using multiple edge servers in a wireless setup. Finally, the third part (Chapter 5) focuses on split learning, which reduces the computation burden at the clients by splitting the model architecture (i.e., the neural network) into a client-side model and a server-side model; we propose a local-loss-based training method specifically geared to the split learning setup that addresses the latency and communication-efficiency issues of current federated/split learning approaches. (Minimal illustrative sketches of these three approaches appear after this record.)
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.title: Fast and robust distributed machine learning
dc.title.alternative: 빠르고 강인한 분산 기계학습
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 (KAIST), 전기및전자공학부 (School of Electrical Engineering)
dc.contributor.alternativeauthor: 한동준 (Han, Dong-Jun)
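The abstract's first part mentions coding-theoretic tools for fast, straggler-robust distributed learning but does not spell out a scheme here. As one generic instance of that idea (not necessarily the construction developed in the thesis), the following Python sketch shows replication-based coded gradient computation: each data partition is assigned cyclically to r = s + 1 workers, so the master recovers the exact full gradient even when any s workers straggle. The worker count, toy least-squares objective, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 5, 2          # n workers, tolerate up to s stragglers
r = s + 1            # replication factor (cyclic data assignment)

# Toy least-squares problem, split into n data partitions (illustrative).
X, y, w = rng.normal(size=(100, 4)), rng.normal(size=100), np.zeros(4)
parts = np.array_split(np.arange(100), n)

def partial_grad(idx):
    """Gradient of 0.5 * ||X w - y||^2 restricted to one data partition."""
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi)

# Worker j holds partitions j, j+1, ..., j+r-1 (mod n) and reports one
# partial gradient per partition it holds.
assignment = {j: [(j + k) % n for k in range(r)] for j in range(n)}

# Simulate s straggling workers: the master only hears from the rest.
fast = rng.choice(n, size=n - s, replace=False)

# Every partition is replicated on r = s + 1 workers, so the surviving
# workers still cover all partitions; keep one copy of each.
recovered = {}
for j in fast:
    for p in assignment[j]:
        recovered.setdefault(p, partial_grad(parts[p]))

full_grad = sum(recovered[p] for p in range(n))
assert np.allclose(full_grad, X.T @ (X @ w - y))  # exact gradient recovered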
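For the federated learning part (Chapter 4), the abstract describes speeding up training with multiple edge servers. Below is a minimal sketch of a generic hierarchical federated-averaging flow under that kind of topology (clients -> edge servers -> cloud); the two-edge setup, client counts, learning rate, and local least-squares objective are assumptions for illustration, not the thesis's specific wireless algorithm.

```python
import numpy as np

def local_sgd(w, data, lr=0.01, steps=5):
    """Toy local update: a few SGD steps on one client's least-squares data."""
    X, y = data
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def average(models, weights):
    """Weighted parameter averaging -- the core aggregation step."""
    return sum(wt * m for wt, m in zip(weights, models)) / sum(weights)

rng = np.random.default_rng(0)
w_global = np.zeros(3)

# Hypothetical topology: two edge servers, serving 3 and 2 clients.
edges = [[(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(k)]
         for k in (3, 2)]

for _ in range(10):  # global rounds
    edge_models = []
    for clients in edges:
        # Each edge server first aggregates the clients it serves...
        local = [local_sgd(w_global.copy(), d) for d in clients]
        edge_models.append(average(local, [len(d[1]) for d in clients]))
    # ...then the cloud averages the edge models and broadcasts back.
    w_global = average(edge_models, [len(c) for c in edges])
```

Averaging at the edge first keeps most traffic within each edge server's cell; only the (fewer, pre-aggregated) edge models travel to the cloud each round.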
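For the split learning part (Chapter 5), the abstract describes a local-loss-based training method: the client trains its sub-model against its own auxiliary loss instead of waiting for gradients to travel back from the server. A minimal PyTorch sketch of that general idea follows; the layer sizes, auxiliary head, and single toy batch are assumptions, not the thesis's architecture.

```python
import torch
import torch.nn as nn

# Client-side piece of the split network plus a small auxiliary head, so
# the client can compute a *local* loss instead of waiting for the
# server's gradients (layer sizes are illustrative assumptions).
client_body = nn.Sequential(nn.Linear(20, 32), nn.ReLU())
client_head = nn.Linear(32, 2)                     # auxiliary classifier
server_model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

opt_client = torch.optim.SGD(
    list(client_body.parameters()) + list(client_head.parameters()), lr=0.1)
opt_server = torch.optim.SGD(server_model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 20)                 # one toy batch
y = torch.randint(0, 2, (8,))

# Client step: forward to the cut layer, update with the local loss only.
smashed = client_body(x)               # cut-layer activations
opt_client.zero_grad()
loss_fn(client_head(smashed), y).backward()
opt_client.step()                      # no server-to-client gradient round trip

# Server step: train on detached activations, in parallel with the client.
opt_server.zero_grad()
loss_fn(server_model(smashed.detach()), y).backward()
opt_server.step()
```

Because the client's update depends only on its local loss, the per-round latency no longer includes the server's backward pass or the gradient download, which is the communication/latency saving the abstract refers to.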
Appears in Collection
EE-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
