Efficient and interpretable recurrent neural network grammars

Abstract
In this work, we are motivated by the hypothesis that a linguistically informed inductive bias is beneficial to NLP models. Based on this hypothesis, we seek to improve a model with such an inductive bias: recurrent neural network grammars (RNNG). Our primary contribution is Parallel RNNG, which obtains 10-15x speedups for large batch sizes. We also propose to augment RNNG with attention, which makes the model interpretable but slightly deteriorates its predictive power. Finally, we apply well-known regularisation methods from RNNs to RNNG and show that they improve RNNG's predictive power. All of our methods are evaluated on English and Chinese treebanks and compared against baselines without strong inductive biases.
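For readers unfamiliar with RNNG, the sketch below illustrates the stack-based transition system (NT(X), GEN(w), REDUCE) from Dyer et al. (2016) that gives the model its syntactic inductive bias. It is a minimal illustration assuming the standard generative action set; it does not reproduce the thesis's Parallel RNNG, attention, or regularisation contributions, and all class and function names are hypothetical.

```python
# Minimal, illustrative replay of an RNNG-style action sequence.
# It only builds the constituency tree implied by the actions;
# the neural scoring of actions is omitted entirely.

from dataclasses import dataclass, field
from typing import List, Union


@dataclass
class Tree:
    label: str
    children: List[Union["Tree", str]] = field(default_factory=list)

    def __str__(self) -> str:
        inside = " ".join(str(c) for c in self.children)
        return f"({self.label} {inside})"


def run_transitions(actions: List[str]) -> Tree:
    """Execute NT(X) / GEN(w) / REDUCE actions on a stack."""
    stack: List[Union[Tree, str]] = []
    for act in actions:
        if act.startswith("NT("):        # open a new constituent, e.g. NT(NP)
            stack.append(Tree(label=act[3:-1]))
        elif act.startswith("GEN("):     # generate a terminal word, e.g. GEN(cat)
            stack.append(act[4:-1])
        elif act == "REDUCE":            # close the innermost open constituent
            children: List[Union[Tree, str]] = []
            while not (isinstance(stack[-1], Tree) and not stack[-1].children):
                children.append(stack.pop())
            stack[-1].children = list(reversed(children))
        else:
            raise ValueError(f"unknown action: {act}")
    assert len(stack) == 1 and isinstance(stack[0], Tree)
    return stack[0]


if __name__ == "__main__":
    gold = ["NT(S)", "NT(NP)", "GEN(the)", "GEN(cat)", "REDUCE",
            "NT(VP)", "GEN(sleeps)", "REDUCE", "REDUCE"]
    print(run_transitions(gold))   # (S (NP the cat) (VP sleeps))
```

In the full model, each action is scored by neural representations of the partially built tree and the action history; the irregular, per-sentence stack operations sketched above are presumably what makes naive batching slow and what the Parallel RNNG contribution targets.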
Advisors
Oh, Alice Haeyun (오혜연)
Description
Korea Advanced Institute of Science and Technology (KAIST) : School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology : School of Computing, 2020.8, [v, 56 p.]

Keywords

Natural Language Processing; Recurrent Neural Network Grammars; Deep Learning; Linguistically Informed Inductive Bias; Artificial Intelligence

URI
http://hdl.handle.net/10203/285008
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925169&flag=dissertation
Appears in Collection
CS-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
