Subword-level Word Vector Representations for Korean

Cited 28 times in Web of Science; cited 30 times in Scopus
Research on distributed word representations is focused on widely-used languages such as English. Although the same methods can be used for other languages, language-specific knowledge can enhance the accuracy and richness of word vector representations. In this paper, we look at improving distributed word representations for Korean using knowledge about the unique linguistic structure of Korean. Specifically, we decompose Korean words into the jamo level, beyond the character-level, allowing a systematic use of subword information. To evaluate the vectors, we develop Korean test sets for word similarity and analogy and make them publicly available. The results show that our simple method outperforms word2vec and character-level Skip-Grams on semantic and syntactic similarity and analogy tasks and contributes positively toward downstream NLP tasks such as sentiment analysis.
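The jamo-level decomposition described above relies on the fact that precomposed Hangul syllables occupy a contiguous Unicode block (U+AC00..U+D7A3) whose layout encodes the initial consonant, vowel, and optional final consonant arithmetically. The following sketch illustrates that decomposition step; it is a minimal illustration of the general technique, not the paper's actual preprocessing code, and the helper name `to_jamo` is hypothetical.

```python
# Decompose precomposed Hangul syllables into jamo via Unicode arithmetic.
# Syllable layout: code = 0xAC00 + (lead * 21 + vowel) * 28 + tail

CHOSEONG = [chr(0x1100 + i) for i in range(19)]           # 19 initial consonants
JUNGSEONG = [chr(0x1161 + i) for i in range(21)]          # 21 vowels
JONGSEONG = [""] + [chr(0x11A8 + i) for i in range(27)]   # 27 finals; "" = no final

def to_jamo(word):
    """Split each Hangul syllable in `word` into its jamo sequence."""
    jamos = []
    for ch in word:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:                      # precomposed Hangul block
            idx = code - 0xAC00
            jamos.append(CHOSEONG[idx // (21 * 28)])      # initial consonant
            jamos.append(JUNGSEONG[(idx % (21 * 28)) // 28])  # vowel
            tail = JONGSEONG[idx % 28]
            if tail:                                      # final consonant, if any
                jamos.append(tail)
        else:
            jamos.append(ch)                              # pass non-Hangul through
    return jamos

# 한 (U+D55C) decomposes into three jamo: ㅎ, ㅏ, ㄴ
print(to_jamo("한"))
```

The resulting jamo sequences can then be fed to a subword-aware Skip-Gram model in place of raw characters, which is the core idea the abstract describes.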
Publisher
ASSOC COMPUTATIONAL LINGUISTICS-ACL
Issue Date
2018-07
Language
English
Citation

56th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 2429-2438

DOI
10.18653/v1/P18-1226
URI
http://hdl.handle.net/10203/275115
Appears in Collection
CS-Conference Papers(학술회의논문)
Files in This Item
There are no files associated with this item.