Efficient and interpretable recurrent neural network grammars

DC Field: Value
dc.contributor.advisor: Oh, Alice Haeyun
dc.contributor.advisor: 오혜연
dc.contributor.author: Oehrstroem, Christoffer Nave
dc.date.accessioned: 2021-05-13T19:38:29Z
dc.date.available: 2021-05-13T19:38:29Z
dc.date.issued: 2020
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925169&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/285008
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST) : School of Computing, 2020.8, [v, 56 p.]
dc.description.abstract: This work is motivated by the hypothesis that a linguistically informed inductive bias benefits NLP models. Based on this hypothesis, we seek to improve a model that has such a bias: recurrent neural network grammars (RNNG). Our primary contribution is Parallel RNNG, which achieves 10 to 15 times speedups at large batch sizes. We also propose augmenting RNNG with attention, which makes the model interpretable but slightly reduces its predictive power. Finally, we apply well-known RNN regularisation methods to RNNG and show that they improve its predictive power. All of our methods are evaluated on English and Chinese treebanks and compared against baselines without strong inductive biases.
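For readers unfamiliar with the model the abstract refers to, the sketch below illustrates the standard top-down RNNG transition system: NT(X) opens a nonterminal, SHIFT consumes the next word, and REDUCE closes the most recent open constituent into a subtree. This is a minimal, non-neural illustration of the parsing actions only, not code from the thesis; all names are illustrative.

    # Minimal illustrative sketch of the top-down RNNG transition system
    # (no neural scoring; not thesis code; all names are illustrative).
    def parse(actions, words):
        """Replay a sequence of RNNG actions over a buffer of words."""
        stack, buffer = [], list(words)
        for action in actions:
            if action.startswith("NT("):
                # e.g. "NT(S)" pushes an open nonterminal marker for S
                stack.append(("open", action[3:-1]))
            elif action == "SHIFT":
                # move the next input word onto the stack
                stack.append(("word", buffer.pop(0)))
            elif action == "REDUCE":
                # pop completed children down to the nearest open nonterminal,
                # then replace them with a single bracketed subtree
                children = []
                while stack[-1][0] != "open":
                    children.append(stack.pop())
                label = stack.pop()[1]
                subtree = "(" + label + " " + " ".join(
                    c[1] for c in reversed(children)) + ")"
                stack.append(("tree", subtree))
        return stack[0][1]

    print(parse(
        ["NT(S)", "NT(NP)", "SHIFT", "REDUCE",
         "NT(VP)", "SHIFT", "REDUCE", "REDUCE"],
        ["birds", "sing"],
    ))  # prints: (S (NP birds) (VP sing))

In the full RNNG, each of these stack operations is additionally scored by recurrent networks over the stack, buffer, and action history; the thesis's Parallel RNNG contribution concerns batching such stack operations efficiently.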
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Natural Language Processing; Recurrent Neural Network Grammars; Deep Learning; Linguistically Informed Inductive Bias; Artificial Intelligence
dc.subject (Korean): Natural Language Processing; Recurrent Neural Network Grammar Model; Deep Learning; Linguistically Informed Inductive Bias; Artificial Intelligence
dc.title: Efficient and interpretable recurrent neural network grammars
dc.title.alternative (Korean): Efficient and interpretable recurrent neural network grammar model
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: KAIST : School of Computing
dc.contributor.alternativeauthor: 오르스트롬 크리스토퍼 네버 (Korean transliteration of the author's name)
Appears in Collection: CS-Theses_Master (Master's Theses)
