Latent Question Interpretation Through Variational Adaptation

Cited 4 times in Web of Science · Cited 3 times in Scopus
  • Hits: 742
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Parshakova, Tetiana | ko
dc.contributor.author | Rameau, Francois | ko
dc.contributor.author | Serdega, Andriy | ko
dc.contributor.author | Kweon, In-So | ko
dc.contributor.author | Kim, Dae-Shik | ko
dc.date.accessioned | 2019-08-27T08:20:03Z | -
dc.date.available | 2019-08-27T08:20:03Z | -
dc.date.created | 2019-08-26 | -
dc.date.issued | 2019-11 | -
dc.identifier.citation | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, v.27, no.11, pp.1713 - 1724 | -
dc.identifier.issn | 2329-9290 | -
dc.identifier.uri | http://hdl.handle.net/10203/265541 | -
dc.description.abstract | Most artificial neural network models for question answering rely on complex attention mechanisms. These techniques demonstrate high performance on existing datasets; however, they are limited in their ability to capture natural language variability and to generate diverse, relevant answers. To address this limitation, we propose a model that learns multiple interpretations of a given question. This diversity is ensured by our interpretation policy module, which automatically adapts the parameters of a question-answering model with respect to a discrete latent variable. This variable follows the distribution over interpretations learned by the interpretation policy through a semi-supervised variational inference framework. To boost performance further, the resulting policy is fine-tuned with a policy gradient using answer-accuracy rewards. We demonstrate the relevance and efficiency of our model through a large panel of experiments. Qualitative results, in particular, underline the ability of the proposed architecture to discover multiple interpretations of a question. When tested on the Stanford Question Answering Dataset 1.1, our model outperforms the baseline methods in finding multiple and diverse answers. To assess our strategy from a human standpoint, we also conduct a large-scale user study. This study highlights the ability of our network to produce diverse and coherent answers compared to existing approaches. Our PyTorch implementation is available as open source. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Latent Question Interpretation Through Variational Adaptation | -
dc.type | Article | -
dc.identifier.wosid | 000480309600005 | -
dc.identifier.scopusid | 2-s2.0-85070478937 | -
dc.type.rims | ART | -
dc.citation.volume | 27 | -
dc.citation.issue | 11 | -
dc.citation.beginningpage | 1713 | -
dc.citation.endingpage | 1724 | -
dc.citation.publicationname | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING | -
dc.identifier.doi | 10.1109/TASLP.2019.2929647 | -
dc.contributor.localauthor | Kweon, In-So | -
dc.contributor.localauthor | Kim, Dae-Shik | -
dc.contributor.nonIdAuthor | Parshakova, Tetiana | -
dc.contributor.nonIdAuthor | Serdega, Andriy | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Question answering | -
dc.subject.keywordAuthor | neural variational inference | -
dc.subject.keywordAuthor | semi-supervised learning | -
dc.subject.keywordAuthor | policy gradient | -
dc.subject.keywordAuthor | discrete latent variable | -
dc.subject.keywordAuthor | information retrieval | -
dc.subject.keywordAuthor | neural networks | -
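
The abstract above describes an architecture in which a discrete latent "interpretation" variable, sampled from an interpretation policy, adapts the parameters of a question-answering model, and the policy is further fine-tuned with a policy gradient using answer accuracy as the reward. The following is a minimal, hypothetical PyTorch sketch of how those pieces can fit together in one training step. All module names, sizes, and the toy data are illustrative assumptions, not the authors' implementation, and the paper's semi-supervised variational objective is not reproduced here.

```python
# Hypothetical sketch only -- not the authors' open-source implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class InterpretationPolicy(nn.Module):
    """Maps a question encoding to a distribution over K discrete interpretations."""
    def __init__(self, hidden_dim, num_interpretations):
        super().__init__()
        self.to_logits = nn.Linear(hidden_dim, num_interpretations)

    def forward(self, q_enc):
        return Categorical(logits=self.to_logits(q_enc))

class AdaptedQAModel(nn.Module):
    """Toy QA scorer whose answer head is selected by the latent interpretation z."""
    def __init__(self, hidden_dim, num_interpretations, num_answers):
        super().__init__()
        self.encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # One head per interpretation: a crude stand-in for parameter adaptation.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, num_answers) for _ in range(num_interpretations)]
        )

    def forward(self, q_emb, z):
        _, h = self.encoder(q_emb)          # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        # Route each example through the head chosen by its sampled interpretation.
        return torch.stack([self.heads[zi](hi) for zi, hi in zip(z.tolist(), h)])

# One illustrative training step on random placeholder data.
hidden_dim, K, num_answers, batch = 32, 4, 10, 8
policy = InterpretationPolicy(hidden_dim, K)
qa = AdaptedQAModel(hidden_dim, K, num_answers)
opt = torch.optim.Adam(list(policy.parameters()) + list(qa.parameters()), lr=1e-3)

q_emb = torch.randn(batch, 5, hidden_dim)        # placeholder question embeddings
gold = torch.randint(0, num_answers, (batch,))   # placeholder gold answer ids

dist = policy(q_emb.mean(dim=1))                 # q(z | question)
z = dist.sample()                                # one interpretation per question
logits = qa(q_emb, z)

answer_loss = F.cross_entropy(logits, gold)               # supervised answer loss
reward = (logits.argmax(dim=-1) == gold).float()          # answer accuracy as reward
policy_loss = -(dist.log_prob(z) * (reward - reward.mean())).mean()  # REINFORCE with baseline

opt.zero_grad()
(answer_loss + policy_loss).backward()
opt.step()
```

The per-interpretation answer heads are only a stand-in for the paper's parameter-adaptation mechanism; the point of the sketch is to show how the sampled discrete latent variable, the supervised answer loss, and the policy-gradient term interact within a single update.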
Appears in Collection
EE-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.
