DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Yoo, Shin | - |
dc.contributor.advisor | 유신 | - |
dc.contributor.author | Kim, Seah | - |
dc.date.accessioned | 2021-05-13T19:38:13Z | - |
dc.date.available | 2021-05-13T19:38:13Z | - |
dc.date.issued | 2020 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925155&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/284994 | - |
dc.description | Master's thesis - 한국과학기술원 : 전산학부, 2020.8, [iv, 32 p.] | - |
dc.description.abstract | With the broad and rapid adoption of Deep Neural Networks (DNNs) in various domains, an urgent need to validate their behaviour has arisen, resulting in various test adequacy metrics for DNNs. One of these metrics, Surprise Adequacy (SA), aims to measure how surprising a new input is based on its similarity to the data used for training. While SA has been shown to be effective for image classifiers based on Convolutional Neural Networks (CNNs), it has not been studied in the Natural Language Processing (NLP) domain. This paper applies SA to NLP, in particular to three tasks: text classification, sequence labelling, and question answering. The aim is to investigate whether SA correlates well with the correctness of the outputs. SA also enables prioritisation of failing inputs, thereby helping to reduce the high cost of labelling. An empirical evaluation shows that SA can generally work as a test adequacy metric in Natural Language Processing, especially for classification tasks. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Deep Learning; Natural Language Processing; Software Testing | - |
dc.subject | 딥러닝; 자연어 처리; 소프트웨어 테스팅 | - |
dc.title | Evaluating surprise adequacy on natural language processing | - |
dc.title.alternative | 자연어 처리의 놀라움 적합도 평가 | - |
dc.type | Thesis (Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 전산학부 | - |
dc.contributor.alternativeauthor | 김세아 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
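For context, the Surprise Adequacy metric described in the abstract can be sketched in code. The Distance-based variant (DSA) scores a new input by comparing its neuron activation trace against the activation traces of the training set: the distance to the nearest same-class training trace, normalised by that neighbour's distance to the nearest other-class trace. This is a minimal illustrative sketch, not code from the thesis; the function name, array shapes, and toy data are assumptions.

```python
import numpy as np

def dsa(train_ats, train_labels, at, pred_label):
    """Distance-based Surprise Adequacy (DSA) for a single input.

    train_ats:    (N, D) activation traces collected from the training set
    train_labels: (N,)   labels the model predicts for those training inputs
    at:           (D,)   activation trace of the new input under test
    pred_label:          label the model predicts for the new input
    """
    same = train_ats[train_labels == pred_label]
    other = train_ats[train_labels != pred_label]

    # dist_a: distance to the nearest training trace of the same class
    d_same = np.linalg.norm(same - at, axis=1)
    nearest_same = same[np.argmin(d_same)]
    dist_a = d_same.min()

    # dist_b: distance from that neighbour to the nearest other-class trace
    dist_b = np.linalg.norm(other - nearest_same, axis=1).min()

    # a large ratio means the input sits close to a class boundary,
    # i.e. it is "surprising" relative to the training data
    return dist_a / dist_b
```

A high DSA marks an input as surprising, which is what allows failing inputs to be prioritised for labelling, as the abstract notes.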