Pre-training a neural model to overcome data scarcity in relation extraction from text

Data scarcity is a major stumbling block in relation extraction. We propose an unsupervised pre-training method that extracts relational information from a large amount of unlabeled data prior to supervised learning, targeting situations where gold-labeled data are difficult to obtain. During the pre-training phase, we use an objective function that requires no labeled data: the model attempts to predict clue words crucial for inferring the semantic relation type between two entities in a given sentence. Experimental results on public datasets show that our approach is effective in data-scarce settings.
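The pre-training objective described above can be sketched as follows. This is a minimal, illustrative NumPy version under stated assumptions: the toy vocabulary, layer sizes, and function names are invented here, not taken from the thesis, and the convolutional encoder stands in for the CNN the abstract mentions. The key idea it demonstrates is that the training signal (the masked clue word) comes from the sentence itself, so no relation labels are needed.

```python
# Sketch (assumption, not the thesis code): mask a clue word between two
# entities and train a CNN encoder to predict it from the rest of the sentence.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<mask>", "born", "in", "works", "for", "alice", "acme", "city"]
V, D, H, K = len(VOCAB), 16, 32, 3   # vocab size, embed dim, filters, kernel

E = rng.normal(scale=0.1, size=(V, D))      # word embedding table
W = rng.normal(scale=0.1, size=(H, K * D))  # convolution filters
U = rng.normal(scale=0.1, size=(V, H))      # projection back to the vocabulary

def encode(token_ids):
    """CNN encoder: convolve over word embeddings, then max-pool over time."""
    x = E[token_ids]                          # (T, D)
    feats = []
    for t in range(len(token_ids) - K + 1):
        window = x[t:t + K].reshape(-1)       # flatten a K-token window
        feats.append(np.tanh(W @ window))     # (H,) feature per window
    return np.max(np.stack(feats), axis=0)    # max-over-time pooling -> (H,)

def clue_word_loss(token_ids, clue_pos):
    """Cross-entropy for predicting the masked clue word (no labels needed)."""
    target = token_ids[clue_pos]
    masked = list(token_ids)
    masked[clue_pos] = VOCAB.index("<mask>")  # hide the clue word
    logits = U @ encode(masked)
    z = logits - logits.max()                 # stable log-softmax
    logp = z - np.log(np.exp(z).sum())
    return -logp[target]

# "alice [born] in city": mask the clue word "born" and score the prediction.
sent = [VOCAB.index(w) for w in ("alice", "born", "in", "city")]
loss = clue_word_loss(sent, clue_pos=1)
```

Minimizing this loss over a large unlabeled corpus pre-trains the encoder weights, which can then be transferred to the supervised relation classifier.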
Advisors
Myaeng, Sung-Hyon
Description
KAIST: School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2018
Identifier
325007
Language
eng
Description

Thesis (Master's) - KAIST: School of Computing, 2018.2, [iii, 28 p.]

Keywords

Relation extraction; unsupervised pre-training; transfer learning; convolutional neural network (CNN); data scarcity; dependency parse tree

URI
http://hdl.handle.net/10203/267036
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734097&flag=dissertation
Appears in Collection
CS-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
