Marginal information for structure learning

This thesis introduces methods for incorporating marginal information into the structure learning of graphical models. As the number of variables in a model grows, the performance of structure learning deteriorates because the search space becomes very large while the sample size stays limited. Marginal models, on the other hand, involve only subsets of the variables and can therefore be obtained more easily, either from domain experts or from the same data set. This thesis proposes two methods that use these marginal model structures to improve learning of the whole structure.

In the first part, we improve score-based learning of Bayesian networks by using marginal models. Structure learning for Bayesian networks is typically carried out heuristically, since an exhaustive search for an optimal model carries an explosive computational burden. In such a heuristic search, a structural error made at one point of learning can deteriorate the subsequent learning. We propose a remedial approach to this error-for-error process that uses marginal model structures: local errors in the structure are fixed by reference to the marginal structures, and in this sense we call the remedy a marginally corrective procedure. For this procedure we devise a new score function with two components, the likelihood function of a model and a discrepancy measure against the marginal structures. Experiments on benchmark data sets show that the proposed method compares favorably with a couple of the most popular algorithms.

In the second part of the thesis, we propose a structure learning method that combines marginal model structures. In many domains, data can be obtained from various sources, and each source may not contain all the variables in the system, depending on the environment in which the data were collected. The graph constructed from each data set can be regarded as a marginal model of the entire model, and these marginal models need to be combined for joint inference.
We propose methods of finding the complete set of possible combined model structures for undirected graphs. The proposed method outperforms the existing method, as shown in our experiments.
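The corrective score in the first part is described only as a combination of a likelihood term and a discrepancy measure against the marginal structures. A minimal sketch of one score of that shape, assuming a penalized form likelihood − λ · discrepancy over undirected edge sets; the function names (`corrective_score`, `discrepancy`), the symmetric-difference penalty, and the weight `lam` are illustrative assumptions, not the thesis's actual definitions:

```python
def subgraph(edges, nodes):
    """Edges of the graph induced on a subset of nodes."""
    return {e for e in edges if e <= nodes}

def discrepancy(candidate_edges, marginal_edges, marginal_nodes):
    """Count edge disagreements between the candidate graph,
    restricted to the marginal's variables, and the marginal structure."""
    restricted = subgraph(candidate_edges, marginal_nodes)
    return len(restricted ^ marginal_edges)  # symmetric difference

def corrective_score(log_likelihood, candidate_edges, marginals, lam=1.0):
    """Fit of the candidate model minus a penalty for disagreeing
    with each given marginal structure (hypothetical form)."""
    penalty = sum(discrepancy(candidate_edges, m_edges, m_nodes)
                  for m_nodes, m_edges in marginals)
    return log_likelihood - lam * penalty

# Toy example: candidate A-B, B-C; one agreeing marginal over {A, B},
# one disagreeing marginal over {A, C}.
candidate = {frozenset({"A", "B"}), frozenset({"B", "C"})}
marginals = [({"A", "B"}, {frozenset({"A", "B"})}),
             ({"A", "C"}, {frozenset({"A", "C"})})]
score = corrective_score(-10.0, candidate, marginals, lam=2.0)  # -10 - 2*1 = -12.0
```

In a search procedure, a candidate structure whose induced subgraphs contradict the trusted marginal structures is penalized, which is one simple way a "marginally corrective" criterion could steer learning back toward the marginal information.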
Advisors
Chung, Yeonseung (정연승); Kim, Sung-ho (김성호)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2019
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology: Department of Mathematical Sciences, 2019.8, [v, 62 p.]

Keywords

Bayesian network; bootstrap; edge-error probability; EM algorithm; marginal model; Markov equivalence; score function; sequential learning; structural error; Markov network; Markovian combination

URI
http://hdl.handle.net/10203/283237
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=871404&flag=dissertation
Appears in Collection
MA-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
