DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Oh, Alice Haeyun | - |
dc.contributor.advisor | 오혜연 | - |
dc.contributor.author | Oehrstroem, Christoffer Nave | - |
dc.date.accessioned | 2021-05-13T19:38:29Z | - |
dc.date.available | 2021-05-13T19:38:29Z | - |
dc.date.issued | 2020 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925169&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/285008 | - |
dc.description | Thesis (Master's) - KAIST : School of Computing, 2020.8, [v, 56 p.] | - |
dc.description.abstract | In this work, we are motivated by the hypothesis that a linguistically informed inductive bias is beneficial to NLP models. Based on this hypothesis, we seek to improve a model with such an inductive bias: recurrent neural network grammars (RNNG). Our primary contribution is Parallel RNNG, with which we obtain 10 to 15 times speedups for large batch sizes. We also propose to improve RNNG with the use of attention. This makes RNNG interpretable, but causes its predictive power to deteriorate slightly. Finally, we apply well-known regularisation methods from RNNs to RNNG and show that they improve the predictive power of RNNG. All of our methods are evaluated on English and Chinese treebanks and compared to baselines without strong inductive biases. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Natural Language Processing; Recurrent Neural Network Grammars; Deep Learning; Linguistically Informed Inductive Bias; Artificial Intelligence | - |
dc.subject | 자연 언어 처리; 문법 순환 신경망 모델; 딥 러닝; 언어 기반 귀납적 편향; 인공 지능 | - |
dc.title | Efficient and interpretable recurrent neural network grammars | - |
dc.title.alternative | 효율적이고 해석가능한 문법 순환 신경망 모델 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 전산학부 | - |
dc.contributor.alternativeauthor | 오르스트롬 크리스토퍼 네버 | - |