Rotated word vector representations and their interpretability

Vector representations of words improve performance in various NLP tasks, but high-dimensional word vectors are difficult to interpret. We apply several rotation algorithms to word vector representations to improve their interpretability. Unlike previous approaches that induce sparsity, the rotated vectors remain interpretable while preserving the expressive power of the original vectors. Furthermore, any pre-built word vector representation can be rotated for improved interpretability. We apply rotation to Skip-gram and GloVe vectors and compare their expressive power and interpretability with the original vectors and with sparse overcomplete vectors. The results show that the rotated vectors outperform both the original and the sparse overcomplete vectors on interpretability and expressiveness tasks.
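
For readers who want to experiment, below is a minimal sketch of rotating a pre-built embedding matrix with varimax, one representative rotation criterion from factor analysis. This is not the authors' released code: the function, the toy random matrix, and the hyperparameters are illustrative assumptions, and the paper compares several rotation algorithms rather than only this one.

import numpy as np

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-6):
    # Iteratively find an orthogonal rotation R maximizing the varimax
    # criterion (variance of squared loadings per column), which pushes
    # each dimension toward a few large entries and many near-zero ones.
    # Phi: (vocab_size, dim) word vector matrix.
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lam = Phi @ R
        u, s, vt = np.linalg.svd(
            Phi.T @ (Lam ** 3 - (gamma / p) * Lam @ np.diag(np.sum(Lam ** 2, axis=0)))
        )
        R = u @ vt
        d_new = np.sum(s)
        if d != 0.0 and d_new / d < 1.0 + tol:
            break
        d = d_new
    return Phi @ R, R

# Toy example: rotate a random "embedding" matrix; a real use would
# load pre-trained Skip-gram or GloVe vectors here instead.
rng = np.random.default_rng(0)
W = rng.standard_normal((1000, 50))
W_rot, R = varimax(W)
# R is orthogonal, so dot products (and hence cosine similarities)
# between word vectors are preserved up to numerical error.
assert np.allclose(R.T @ R, np.eye(50), atol=1e-6)

Because the rotation is orthogonal, pairwise similarities between word vectors are unchanged, which is consistent with the abstract's claim that rotation improves interpretability without sacrificing the expressive power of the original vectors.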
Publisher
Association for Computational Linguistics (ACL)
Issue Date
2017-09
Language
English
Citation

2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pp. 401–411

DOI
10.18653/v1/d17-1041
URI
http://hdl.handle.net/10203/311551
Appears in Collection
CS-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
