Worldly Wise (WoW) - Cross-Lingual Knowledge Fusion for Fact-based Visual Spoken-Question Answering

Cited 3 times in Web of Science; cited 0 times in Scopus
  • Hits: 73
  • Downloads: 0
DC Field / Value
dc.contributor.author: Ramnath, Kiran
dc.contributor.author: Sari, Leda
dc.contributor.author: Hasegawa-Johnson, Mark
dc.contributor.author: Yoo, Chang-Dong
dc.date.accessioned: 2022-11-14T07:00:47Z
dc.date.available: 2022-11-14T07:00:47Z
dc.date.created: 2022-06-25
dc.date.issued: 2021-06-07
dc.identifier.citation: 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021
dc.identifier.uri: http://hdl.handle.net/10203/299595
dc.description.abstract: Although Question Answering has long been of research interest, its accessibility to users through a speech interface and its support for multiple languages have not been addressed in prior studies. Toward these ends, we present a new task and a synthetically generated dataset for Fact-based Visual Spoken-Question Answering (FVSQA). FVSQA is based on the FVQA dataset, which requires a system to retrieve an entity from a Knowledge Graph (KG) to answer a question about an image. In FVSQA, the question is spoken rather than typed. Three sub-tasks are proposed: (1) speech-to-text based; (2) end-to-end, without speech-to-text as an intermediate component; and (3) cross-lingual, in which the question is spoken in a language different from that in which the KG is recorded. The end-to-end and cross-lingual tasks are the first to require world knowledge from a multi-relational KG as a differentiable layer in an end-to-end spoken language understanding task; hence the proposed reference implementation is called Worldly-Wise (WoW). WoW is shown to perform end-to-end cross-lingual FVSQA at the same level of accuracy across three languages: English, Hindi, and Turkish.
dc.language: English
dc.publisher: Association for Computational Linguistics
dc.title: Worldly Wise (WoW) - Cross-Lingual Knowledge Fusion for Fact-based Visual Spoken-Question Answering
dc.type: Conference
dc.identifier.wosid: 000895685602004
dc.type.rims: CONF
dc.citation.publicationname: 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Online
dc.identifier.doi: 10.18653/v1/2021.naacl-main.153
dc.contributor.localauthor: Yoo, Chang-Dong
dc.contributor.nonIdAuthor: Ramnath, Kiran
dc.contributor.nonIdAuthor: Sari, Leda
dc.contributor.nonIdAuthor: Hasegawa-Johnson, Mark
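
The abstract's central technical idea is treating the knowledge graph as a differentiable layer, so that answer retrieval can be trained end-to-end from speech features without an intermediate transcript. The following is a minimal, hypothetical PyTorch sketch of that idea; it is not the authors' released WoW code, and all class names, parameters, and dimensions (KGFusionSketch, ent_dim, and so on) are invented for illustration.

import torch
import torch.nn as nn

class KGFusionSketch(nn.Module):
    """Hypothetical sketch of a differentiable KG answer-retrieval layer.

    NOT the authors' implementation; names and sizes are invented.
    """

    def __init__(self, num_entities, ent_dim, speech_dim, img_dim):
        super().__init__()
        # Multi-relational KG entities as a differentiable lookup table
        # (in practice these would be pre-trained KG embeddings).
        self.entity_emb = nn.Embedding(num_entities, ent_dim)
        # Fuse pooled speech features (no speech-to-text step) with
        # image features, projecting into the entity-embedding space.
        self.fuse = nn.Linear(speech_dim + img_dim, ent_dim)

    def forward(self, speech_feat, img_feat):
        # speech_feat: (batch, speech_dim); img_feat: (batch, img_dim)
        query = self.fuse(torch.cat([speech_feat, img_feat], dim=-1))
        # Score every KG entity against the fused query; the highest-
        # scoring entity is taken as the answer.
        return query @ self.entity_emb.weight.T  # (batch, num_entities)

# Toy usage with made-up sizes: 2 spoken questions, 1000 candidate entities.
model = KGFusionSketch(num_entities=1000, ent_dim=256,
                       speech_dim=512, img_dim=2048)
scores = model(torch.randn(2, 512), torch.randn(2, 2048))
answer_ids = scores.argmax(dim=-1)  # predicted KG entity per question

Because the entity table is an ordinary nn.Embedding, the retrieval loss back-propagates into the KG representations themselves, which is one way to realize the "differentiable layer" the abstract describes; under this reading, cross-lingual operation would require only a language-specific speech encoder, since the KG side is shared across languages.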
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
