Optimization and learning of graphical models

DC Field: Value
dc.contributor.advisor: Yi, Yung
dc.contributor.advisor: 이융 (Yi, Yung)
dc.contributor.advisor: Shin, Jinwoo
dc.contributor.advisor: 신진우 (Shin, Jinwoo)
dc.contributor.author: Jang, Hyeryung
dc.contributor.author: 장혜령 (Jang, Hyeryung)
dc.date.accessioned: 2018-05-23T19:37:32Z
dc.date.available: 2018-05-23T19:37:32Z
dc.date.issued: 2017
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=675823&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/242025
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2017.2, [v, 89 p.]
dc.description.abstract: This thesis studies optimization and learning in graphical models via stochastic approximation theory. First, in various multi-agent networked environments, the system can benefit from coordinating the actions of interacting agents at some coordination cost, and a primary goal is to develop a distributed algorithm that maximizes the coordination effect. Such pairwise coordination and node-wise costs in the network can be captured by the graphical-model framework, which turns the task into the problem of finding the optimal graph parameters. We propose several distributed algorithms that require only one-hop message passing and can be interpreted through either Lagrangian duality or game theory. Our algorithms are motivated by a stochastic approximation method that runs a Markov chain incompletely over time, yet provably converges to the optimal solution. In the machine learning field, for parameter learning in graphical models with latent variables, the standard approach, the EM algorithm, is computationally intractable for high-dimensional graphs in both the E and M steps. Since replacing one step with a faster surrogate to combat this intractability often causes the algorithm to fail to converge, we propose a new learning algorithm that is computationally efficient and provably converges to a correct optimum via multi-time-scale stochastic approximation theory; its key idea is to run only a few cycles of a Markov chain in both steps. We demonstrate our theoretical findings through extensive simulations with synthetic data and/or real-world datasets.
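The abstract's central idea, running a Markov chain for only a few steps inside each stochastic update while still converging to a correct optimum, can be illustrated on a toy model. The sketch below is my own illustration, not code from the thesis: the one-variable Bernoulli model, the function names, and the step-size schedule are all assumptions chosen for brevity. It fits the natural parameter of the model by stochastic approximation, replacing the exact model expectation in the gradient with a single Gibbs resampling step of a persistent chain:

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def fit_bernoulli(data, steps=20000, lr=0.05, seed=0):
    """Fit theta in p(x=1) = sigmoid(theta) by stochastic approximation.

    Instead of computing the model expectation E_model[x] = sigmoid(theta)
    exactly, each iteration takes one Gibbs resampling step from a
    persistent chain -- an "incomplete" run of the Markov chain.
    """
    rng = random.Random(seed)
    theta = 0.0
    data_mean = sum(data) / len(data)
    for t in range(steps):
        # One Gibbs step: resample the chain state under the current model.
        x_model = 1 if rng.random() < sigmoid(theta) else 0
        # Noisy gradient of the log-likelihood: E_data[x] - (sampled) E_model[x],
        # with a decreasing Robbins-Monro step size.
        theta += lr / (1 + t) ** 0.5 * (data_mean - x_model)
    return theta
```

Because the step sizes decrease, the sampling noise averages out and `sigmoid(theta)` approaches the empirical mean of the data; the same principle, with more delicate multi-time-scale arguments, underlies running only a few Markov-chain cycles inside each E and M step.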
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.subject: Graphical models
dc.subject: Distributed scheme
dc.subject: Parameter learning
dc.subject: Stochastic approximation theory
dc.subject: Optimization theory
dc.subject: 그래프 모형 (Graphical models)
dc.subject: 분산 알고리즘 (Distributed algorithms)
dc.subject: 파라미터 학습 (Parameter learning)
dc.subject: 확률론적 근사법 (Stochastic approximation)
dc.subject: 최적화 이론 (Optimization theory)
dc.title: Optimization and learning of graphical models
dc.title.alternative: 그래프 모형 최적화와 그래프 모형 학습 알고리즘 : 확률론적 근사법에 따른 접근 (Optimization and learning algorithms for graphical models: a stochastic approximation approach)
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 (KAIST): 전기및전자공학부 (School of Electrical Engineering)
Appears in Collection
EE-Theses_Ph.D. (박사논문, doctoral theses)
Files in This Item
There are no files associated with this item.
