Pre-training a neural model to overcome data scarcity in relation extraction from text

DC Field: Value
dc.contributor.advisor: Myaeng, Sung-Hyon (맹성현)
dc.contributor.author: Jung, Seokwoo
dc.date.accessioned: 2019-09-04T02:46:32Z
dc.date.available: 2019-09-04T02:46:32Z
dc.date.issued: 2018
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734097&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/267036
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Computing, 2018.2, [iii, 28 p.]
dc.description.abstract: Data scarcity is a major stumbling block in relation extraction. We propose an unsupervised pre-training method that extracts relational information from a large amount of unlabeled data prior to supervised learning, for situations in which gold-labeled data are hard to obtain. During the pre-training phase, we use an objective function that requires no labeled data, attempting to predict the clue words crucial for inferring the semantic relation type between two entities in a given sentence. Experimental results on public datasets show that our approach is effective in a data-scarce setting.
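As a rough illustration of the pre-training objective the abstract describes, the sketch below generates label-free (masked sentence, clue word) training pairs. The function name and the between-entity heuristic for selecting clue-word candidates are illustrative assumptions only; the thesis itself relies on dependency parse trees and a CNN encoder.

```python
# Toy sketch (not the thesis's implementation): generate unsupervised
# pre-training examples by masking candidate clue words between two
# entity mentions, so the model learns to predict relation-indicative
# words without any relation labels.

from typing import List, Tuple

MASK = "<MASK>"

def make_pretraining_examples(
    tokens: List[str], e1: int, e2: int
) -> List[Tuple[List[str], str]]:
    """For each token between the two entity positions, emit an
    (input, target) pair: the sentence with that token masked, and
    the token itself as the prediction target."""
    lo, hi = sorted((e1, e2))
    examples = []
    for i in range(lo + 1, hi):
        masked = tokens[:i] + [MASK] + tokens[i + 1:]
        examples.append((masked, tokens[i]))
    return examples

# Example: entities at positions 1 and 3; "founded" is the clue word
# that signals the semantic relation between the two entities.
pairs = make_pretraining_examples(
    ["Steve", "Jobs", "founded", "Apple"], 1, 3
)
```

Because the targets come from the sentence itself, such pairs can be mined from arbitrarily large unlabeled corpora before supervised fine-tuning on the scarce gold-labeled relation data.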
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (한국과학기술원)
dc.subject: Relation extraction; unsupervised pre-training; transfer learning; convolutional neural network (CNN); data scarcity; dependency parse tree
dc.subject: 관계 추출; 비지도 선행 학습; 전이 학습; 합성곱 신경망; 데이터 부족 문제; 의존성 파스 트리
dc.title: Pre-training a neural model to overcome data scarcity in relation extraction from text
dc.title.alternative: 관계 추출에서의 데이터 부족 문제 완화를 위한 인공신경망 사전학습 방법론
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology: School of Computing (한국과학기술원: 전산학부)
dc.contributor.alternativeauthor: 정석우 (Jung, Seokwoo)
Appears in Collection
CS-Theses_Master(석사논문)
