DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Myaeng, Sung Hyon | - |
dc.contributor.advisor | 맹성현 | - |
dc.contributor.author | Park, Joo Hee | - |
dc.date.accessioned | 2018-06-20T06:24:16Z | - |
dc.date.available | 2018-06-20T06:24:16Z | - |
dc.date.issued | 2017 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=718721&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/243447 | - |
dc.description | Master's thesis - KAIST : School of Computing, 2017.8, [iv, 32 p.] | - |
dc.description.abstract | Distributed representations have become a popular approach to capturing word meaning. However, while humans draw on diverse sources such as perception, cognition, and emotion in developing a word's meaning, this diversity is hardly captured by current amodal or bimodal approaches. In this paper, we propose polymodal embeddings to model the multifaceted nature of word meanings. In particular, we integrate several aspects of word meaning using additional data sources, guided by a human cognition model. We show that the proposed method outperforms state-of-the-art baselines on the word similarity and hypernym prediction tasks. We also investigate which aspects of word meaning are not sufficiently reflected in the embedding and suggest a solution. Finally, we computationally demonstrate the differing characteristics of concrete and abstract words, as well as the difference between word similarity and word relatedness. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Polymodality; Word meaning; Word characteristic; Distributed representation; Human cognition model | - |
dc.subject | 다형성; 단어 의미; 단어 특성; 분산 표상; 인지 모델 | - |
dc.title | Polymodal embeddings for the multifaceted nature of word meanings | - |
dc.title.alternative | 단어 의미의 다면성을 고려한 폴리모달 임베딩 | - |
dc.type | Thesis (Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | KAIST : School of Computing | - |
dc.contributor.alternativeauthor | 박주희 | - |
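
As a rough illustration of the idea described in the abstract (fusing several modality-specific views of a word into a single vector), here is a minimal sketch in Python. The modality names, dimensions, and fusion scheme used here, per-modality L2 normalization followed by concatenation, are illustrative assumptions and not the method actually used in the thesis.

```python
import numpy as np

# Hypothetical modality-specific vectors for a single word.
# The modalities and dimensions below are assumptions for
# illustration; the thesis's actual data sources may differ.
modal_vectors = {
    "textual":    np.random.rand(300),  # e.g., a distributional corpus vector
    "perceptual": np.random.rand(128),  # e.g., image-derived features
    "emotional":  np.random.rand(10),   # e.g., affect ratings
}

def polymodal_embedding(vectors):
    """Fuse per-modality vectors into one embedding:
    L2-normalize each modality, then concatenate."""
    parts = []
    for vec in vectors.values():
        norm = np.linalg.norm(vec)
        parts.append(vec / norm if norm > 0 else vec)
    return np.concatenate(parts)

def cosine_similarity(a, b):
    """Score two fused embeddings, as in a word-similarity task."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

w1 = polymodal_embedding(modal_vectors)
w2 = polymodal_embedding({k: np.random.rand(v.size)
                          for k, v in modal_vectors.items()})
print(w1.shape)                 # (438,) = 300 + 128 + 10
print(cosine_similarity(w1, w2))
```

Concatenation keeps each modality's contribution separable in the fused vector; other fusion choices (weighted sums, learned projections) are equally plausible readings of "integrating several aspects of word meanings."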