DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Yi, Yung | - |
dc.contributor.advisor | 이융 | - |
dc.contributor.advisor | Shin, Jinwoo | - |
dc.contributor.advisor | 신진우 | - |
dc.contributor.author | Jang, Hyeryung | - |
dc.contributor.author | 장혜령 | - |
dc.date.accessioned | 2018-05-23T19:37:32Z | - |
dc.date.available | 2018-05-23T19:37:32Z | - |
dc.date.issued | 2017 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=675823&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/242025 | - |
dc.description | Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2017.2, [v, 89 p.] | - |
dc.description.abstract | This thesis studies optimization and learning in graphical models via stochastic approximation theory. First, in many multi-agent networked environments, the system benefits from coordinating the actions of interacting agents at some coordination cost, and the primary goal is a distributed algorithm that maximizes the coordination effect. Such pairwise coordinations and node-wise costs can be captured in the graphical model framework, where the task becomes finding the optimal graph parameter. We propose several distributed algorithms that require only one-hop message passing and that can be interpreted through either Lagrangian duality or game theory. Our algorithms are motivated by a stochastic approximation method that runs a Markov chain only incompletely over time, yet provably converges to the optimal solution. Second, in machine learning, the standard approach to parameter learning in graphical models with latent variables, the EM algorithm, is computationally intractable for high-dimensional graphs in both the E and M steps. Because replacing one step with a faster surrogate to combat this intractability often breaks convergence, we propose a new learning algorithm that is computationally efficient and, via multi-time-scale stochastic approximation theory, provably converges to a correct optimum; its key idea is to run only a few cycles of a Markov chain in both steps. We demonstrate our theoretical findings through extensive simulations on synthetic and real-world datasets. | - |
dc.language | eng | - |
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | - |
dc.subject | Graphical models | - |
dc.subject | Distributed scheme | - |
dc.subject | Parameter learning | - |
dc.subject | Stochastic approximation theory | - |
dc.subject | Optimization theory | - |
dc.subject | 그래프 모형 | - |
dc.subject | 분산 알고리즘 | - |
dc.subject | 파라미터 학습 | - |
dc.subject | 확률론적 근사법 | - |
dc.subject | 최적화 이론 | - |
dc.title | Optimization and learning of graphical models | - |
dc.title.alternative | 그래프 모형 최적화와 그래프 모형 학습 알고리즘 : 확률론적 근사법에 따른 접근 | - |
dc.type | Thesis (Ph.D.) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
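The abstract's key idea — stochastic-approximation learning that runs only a few Markov chain cycles per update instead of waiting for the chain to mix — can be illustrated with a toy model. The sketch below is hypothetical and not the thesis's actual algorithm: it fits the coupling of a 2-node Ising model p(x1, x2) ∝ exp(theta·x1·x2) by gradient ascent on the log-likelihood, estimating the model expectation with a single Gibbs sweep of a persistent chain per update and a diminishing (Robbins-Monro) step size.

```python
# Hypothetical sketch: stochastic-approximation parameter learning in a tiny
# graphical model, running the Markov chain only "incompletely" (one Gibbs
# sweep per gradient update) rather than to stationarity.
import math
import random

random.seed(0)
TRUE_THETA = 0.8  # ground-truth coupling used to generate training data


def gibbs_sweep(x, theta):
    # Resample each spin from its exact conditional:
    # p(x_i = +1 | x_j) = sigmoid(2 * theta * x_j).
    for i in (0, 1):
        j = 1 - i
        p = 1.0 / (1.0 + math.exp(-2.0 * theta * x[j]))
        x[i] = 1 if random.random() < p else -1


# Draw training data from the true model with a long burn-in chain.
x = [1, 1]
for _ in range(1000):
    gibbs_sweep(x, TRUE_THETA)
samples = []
for _ in range(5000):
    gibbs_sweep(x, TRUE_THETA)
    samples.append(x[0] * x[1])
emp_mean = sum(samples) / len(samples)  # empirical E[x1 * x2]

# Learning loop: the log-likelihood gradient is E_data[s] - E_model[s] with
# sufficient statistic s = x1 * x2; the model term is estimated from a
# persistent chain advanced by just ONE incomplete sweep per iteration.
theta, chain = 0.0, [1, -1]
for t in range(1, 20001):
    gibbs_sweep(chain, theta)               # a few MCMC cycles, not full mixing
    grad = emp_mean - chain[0] * chain[1]   # noisy gradient estimate
    theta += (1.0 / t ** 0.6) * grad        # diminishing Robbins-Monro step

print(round(theta, 2))  # should land near the true coupling 0.8
```

Despite each gradient estimate being built from a non-stationary, single-sweep chain, the diminishing step sizes average out the Markovian noise, which is the essence of the convergence guarantees the abstract refers to.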