Interpretable Word Embedding Contextualization

In this paper, we propose a method of calibrating a word embedding so that the semantics it conveys become more relevant to the context. Our method is novel in that its output shows clearly which of the senses originally present in a target word embedding become stronger or weaker. This is made possible by using sparse coding to recover the senses that comprise a word embedding.
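As a rough illustration of the sparse-coding idea the abstract refers to (not the authors' exact procedure), the sketch below decomposes a word vector over a hypothetical dictionary of sense atoms and then re-weights individual sense coefficients before re-composing the vector; the dictionary, vectors, and re-weighting rule are stand-in assumptions.

```python
# Minimal sketch, assuming a made-up sense dictionary and random vectors;
# the paper's actual dictionary and contextual re-weighting are not reproduced here.
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
dim, n_senses = 50, 200                      # embedding dimension, number of sense atoms

# Hypothetical sense dictionary: each row is one unit-length sense atom.
senses = rng.normal(size=(n_senses, dim))
senses /= np.linalg.norm(senses, axis=1, keepdims=True)

# A target word embedding to decompose (random stand-in).
word_vec = rng.normal(size=(1, dim))

# Sparse coding: express the word vector as a sparse combination of sense atoms.
coder = SparseCoder(dictionary=senses,
                    transform_algorithm="lasso_lars",
                    transform_alpha=0.1)
codes = coder.transform(word_vec)[0]         # one coefficient per sense atom

# Contextualization idea: strengthen or weaken individual sense coefficients
# (here an arbitrary +20% boost on the largest one, purely for illustration),
# then re-compose the embedding from the adjusted codes.
adjusted = codes.copy()
adjusted[np.argmax(np.abs(adjusted))] *= 1.2
contextualized_vec = adjusted @ senses

active = np.flatnonzero(codes)
print(f"{len(active)} active senses out of {n_senses}")
```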
Publisher
Association for Computational Linguistics (ACL)
Issue Date
2018-11-01
Language
English
Citation
2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pp. 341-343
URI
http://hdl.handle.net/10203/310301
Appears in Collection
CS-Conference Papers (Conference Papers)