Extracting relation triples from unseen data (Korean title: A training method for the generalization of relation extraction models)

Developing a relation extraction model for unstructured text is essential for automating large-scale knowledge graph maintenance. To keep a knowledge graph up to date, a model must be able to extract relational triples from sentences that may contain unseen entities. Simply fine-tuning BERT on a relational triple extraction task yields excellent performance on seen entities (entities present in the training data), but it does not generalize well to new entities. We find that augmentation with noisy data helps extract relational triples between unseen entities, although it comes at the cost of performance degradation on seen entities. Since this gives us two experts, one strong on seen entities and the other on unseen entities, we filter the predictions of each expert and take their union to get the best of both. Experiments on two standard benchmark datasets, NYT and WebNLG, show that our model outperforms the current state-of-the-art model on unseen data while remaining competitive on the original seen data.
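The filter-and-union step of the abstract can be sketched as a simple set operation over predicted triples. This is a hedged illustration, not the thesis's actual implementation: the per-expert confidence thresholds, the tuple format, and the example predictions are all assumptions introduced here.

```python
# Hypothetical sketch: combine triple predictions from two "expert" models,
# one trained for seen entities and one (noise-augmented) for unseen entities.
# Each expert emits (head, relation, tail, confidence) tuples; we keep only
# predictions above that expert's confidence threshold, then take the union.

def combine_experts(seen_preds, unseen_preds, seen_thr=0.9, unseen_thr=0.7):
    keep_seen = {(h, r, t) for h, r, t, c in seen_preds if c >= seen_thr}
    keep_unseen = {(h, r, t) for h, r, t, c in unseen_preds if c >= unseen_thr}
    return keep_seen | keep_unseen  # best of both experts

# Toy example (entities and relations are made up for illustration):
seen = [("Obama", "born_in", "Hawaii", 0.95),
        ("Obama", "lived_in", "Chicago", 0.40)]   # filtered out: low confidence
unseen = [("NewCo", "based_in", "Seoul", 0.80)]
print(combine_experts(seen, unseen))
```

The thresholds here are placeholders; in practice they would be tuned so that each expert contributes only the predictions it is reliable on.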
Advisors
Yang, Eunho (양은호)
Description
Korea Advanced Institute of Science and Technology: Graduate School of AI
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Thesis (Master's) - KAIST: Graduate School of AI, 2021.8, [iii, 20 p.]

Keywords

Natural language processing; Relation extraction; Knowledge graph; Generalization

URI
http://hdl.handle.net/10203/292501
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=964737&flag=dissertation
Appears in Collection
AI-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
