DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Choi, Key-Sun | - |
dc.contributor.advisor | 최기선 | - |
dc.contributor.author | Yoon, Sooji | - |
dc.date.accessioned | 2021-05-11T19:34:16Z | - |
dc.date.available | 2021-05-11T19:34:16Z | - |
dc.date.issued | 2019 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=875469&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/283093 | - |
dc.description | Master's thesis - Korea Advanced Institute of Science and Technology (KAIST) : School of Computing, 2019.8, [iv, 39 p.] | - |
dc.description.abstract | Relation extraction is the task of inferring the semantic relation between two entities identified in natural language text. Extracted relations are stored in a knowledge base in triple form. Knowledge bases are widely used in natural language processing applications such as question answering and information retrieval, so research on augmenting them through various relation extraction methods is essential. Distant supervision data, used to train machine-learning-based relation extraction models, is obtained by annotating a target corpus with a predefined knowledge base. This method yields training data easily, but the data contains many errors. In this paper, we propose an improvement to the existing reinforcement-learning-based method of handling erroneous data. Because the existing approach gives its agent a reward that depends on the relation extractor, it cannot overcome the performance limits of the extractor itself. In this study, we propose to compensate for those limits by adding a reward that is independent of the relation extractor. In addition, because of the characteristics of the predefined knowledge base, an entity pair in distant supervision data may be labeled with multiple relations; in such cases the agent cannot obtain the optimal reward in a given state. To address this, we present a way to obtain the optimal reward for each state by separating states by relation. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Relation extraction; distant supervision data; reinforcement learning | - |
dc.subject | 관계추출; 원격지도 학습데이터; 강화학습 | - |
dc.title | (A) study on noisy sentence classification compensating performance deviations of relation extractor | - |
dc.title.alternative | 관계추출기의 성능 편차를 보완하는 강화학습 기반의 오류 문장 분류기법에 대한 연구 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology : School of Computing | - |
dc.contributor.alternativeauthor | 윤수지 | - |
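The abstract describes two ideas: blending an extractor-dependent reward with an extractor-independent one, and separating states by relation so each relation can receive its own optimal reward. A minimal sketch of how such a reward design might look is shown below; all names, the linear blending formula, and the weight `alpha` are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of the reward design described in the abstract.
# The function names, the linear blend, and `alpha` are assumptions
# for illustration only, not the thesis's actual method.

def combined_reward(extractor_prob: float, indep_score: float,
                    alpha: float = 0.5) -> float:
    """Blend an extractor-dependent reward (e.g. the extractor's probability
    for the labeled relation) with an extractor-independent score, so the
    agent's reward is not capped by the extractor's own performance."""
    return alpha * extractor_prob + (1.0 - alpha) * indep_score

def per_relation_states(bag):
    """Split sentences labeled by distant supervision into one state per
    relation, so the agent can seek the optimal reward for each relation
    separately when an entity pair carries multiple relation labels."""
    states = {}
    for sentence, relation in bag:
        states.setdefault(relation, []).append(sentence)
    return states
```

With this split, a sentence bag labeled with two relations yields two independent states, each scored by its own blended reward rather than a single compromise reward.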