Compositional Sentence Representation from Character within Large Context Text

This paper describes the Hierarchical Composition Recurrent Network (HCRN), a three-level hierarchy of compositional models over characters, words, and sentences. The model is designed to overcome two problems of representing a sentence from its constituent word sequence: data sparsity when estimating embeddings of rare words, and the lack of inter-sentence dependency modeling. In the HCRN, word representations are built from characters, which resolves the data-sparsity problem, and inter-sentence dependency is embedded into the sentence representation at the sentence-composition level. We propose a hierarchy-wise language learning scheme to alleviate the optimization difficulties of training deep hierarchical recurrent networks in an end-to-end fashion. The HCRN was evaluated quantitatively and qualitatively on a dialogue act classification task, achieving state-of-the-art performance with a test error rate of 22.7% on the SWBD-DAMSL database.
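The three-level composition described above can be sketched as a stack of recurrent encoders, where the final hidden state of each level becomes the input token of the next. The code below is a minimal illustrative sketch in PyTorch, not the paper's implementation; all dimensions, the choice of GRU cells, and the class name are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HCRN(nn.Module):
    """Illustrative sketch of a 3-level hierarchical composition network:
    characters -> word vectors -> sentence vectors -> context-aware
    sentence representations. Hyperparameters are placeholders, not
    those of the paper."""

    def __init__(self, n_chars=100, char_dim=16, word_dim=32, sent_dim=64):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # Word level: compose each word from its character sequence.
        self.char_rnn = nn.GRU(char_dim, word_dim, batch_first=True)
        # Sentence level: compose each sentence from its word vectors.
        self.word_rnn = nn.GRU(word_dim, sent_dim, batch_first=True)
        # Discourse level: run across sentences to capture
        # inter-sentence dependency within the dialogue.
        self.sent_rnn = nn.GRU(sent_dim, sent_dim, batch_first=True)

    def forward(self, char_ids):
        # char_ids: (n_sents, n_words, n_chars_per_word) integer tensor
        n_sents, n_words, n_chars = char_ids.shape
        chars = self.char_emb(char_ids.view(-1, n_chars))     # (S*W, C, char_dim)
        _, word_h = self.char_rnn(chars)                      # (1, S*W, word_dim)
        words = word_h.squeeze(0).view(n_sents, n_words, -1)  # (S, W, word_dim)
        _, sent_h = self.word_rnn(words)                      # (1, S, sent_dim)
        sents = sent_h.squeeze(0).unsqueeze(0)                # (1, S, sent_dim)
        ctx, _ = self.sent_rnn(sents)                         # (1, S, sent_dim)
        # One context-aware representation per sentence; a dialogue act
        # classifier would sit on top of these vectors.
        return ctx.squeeze(0)                                 # (S, sent_dim)

model = HCRN()
char_ids = torch.randint(0, 100, (4, 7, 12))  # 4 sentences, 7 words, 12 chars
out = model(char_ids)
print(out.shape)  # torch.Size([4, 64])
```

Because words are assembled character by character, rare words still receive meaningful representations, and the top-level recurrence lets each sentence vector condition on the sentences that precede it.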
Publisher
Chinese Academy of Sciences
Issue Date
2017-11-17
Language
English
Citation

24th International Conference on Neural Information Processing (ICONIP), pp.674 - 685

DOI
10.1007/978-3-319-70096-0_69
URI
http://hdl.handle.net/10203/227253
Appears in Collection
EE-Conference Papers (Conference Papers)