DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ramnath, Kiran | ko |
dc.contributor.author | Sari, Leda | ko |
dc.contributor.author | Hasegawa-Johnson, Mark | ko |
dc.contributor.author | Yoo, Chang-Dong | ko |
dc.date.accessioned | 2022-11-14T07:00:47Z | - |
dc.date.available | 2022-11-14T07:00:47Z | - |
dc.date.created | 2022-06-25 | - |
dc.date.issued | 2021-06-07 | - |
dc.identifier.citation | 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021 | - |
dc.identifier.uri | http://hdl.handle.net/10203/299595 | - |
dc.description.abstract | Although Question-Answering has long been of research interest, its accessibility to users through a speech interface and its support for multiple languages have not been addressed in prior studies. Towards these ends, we present a new task and a synthetically generated dataset for Fact-based Visual Spoken-Question Answering (FVSQA). FVSQA is based on the FVQA dataset, which requires a system to retrieve an entity from Knowledge Graphs (KGs) to answer a question about an image. In FVSQA, the question is spoken rather than typed. Three sub-tasks are proposed: (1) speech-to-text-based, (2) end-to-end, without speech-to-text as an intermediate component, and (3) cross-lingual, in which the question is spoken in a language different from that in which the KG is recorded. The end-to-end and cross-lingual tasks are the first to require world knowledge from a multi-relational KG as a differentiable layer in an end-to-end spoken language understanding task, hence the proposed reference implementation is called Worldly-Wise (WoW). WoW is shown to perform end-to-end cross-lingual FVSQA at the same level of accuracy across three languages: English, Hindi, and Turkish. | - |
dc.language | English | - |
dc.publisher | Association for Computational Linguistics | - |
dc.title | Worldly Wise (WoW) - Cross-Lingual Knowledge Fusion for Fact-based Visual Spoken-Question Answering | - |
dc.type | Conference | - |
dc.identifier.wosid | 000895685602004 | - |
dc.type.rims | CONF | - |
dc.citation.publicationname | 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021 | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | Online | - |
dc.identifier.doi | 10.18653/v1/2021.naacl-main.153 | - |
dc.contributor.localauthor | Yoo, Chang-Dong | - |
dc.contributor.nonIdAuthor | Ramnath, Kiran | - |
dc.contributor.nonIdAuthor | Sari, Leda | - |
dc.contributor.nonIdAuthor | Hasegawa-Johnson, Mark | - |
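The abstract's key architectural claim is that the multi-relational KG enters the model as a *differentiable layer* of an end-to-end spoken language understanding system. As a rough illustration only, not the paper's released implementation, the sketch below shows one common way such a layer can be wired: KG entity embeddings are scored against an encoded spoken question, so gradients flow into the entity embeddings during training. All module names, dimensions, and the dot-product scorer here are assumptions.

```python
# Minimal sketch (hypothetical, not the WoW codebase) of a differentiable
# KG answer-retrieval layer on top of an end-to-end speech encoder.
import torch
import torch.nn as nn

class KGAnswerLayer(nn.Module):
    def __init__(self, num_entities: int, dim: int):
        super().__init__()
        # One embedding per KG entity; gradients reach these weights,
        # which is what makes the KG a differentiable layer.
        self.entity_emb = nn.Embedding(num_entities, dim)

    def forward(self, question_vec: torch.Tensor) -> torch.Tensor:
        # question_vec: (batch, dim) -> entity logits: (batch, num_entities)
        return question_vec @ self.entity_emb.weight.T

# `speech_encoder` stands in for any end-to-end audio encoder mapping a
# spoken question to a fixed-size vector (sizes are illustrative).
speech_encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 128))
kg_layer = KGAnswerLayer(num_entities=10_000, dim=128)

features = torch.randn(4, 80)                 # stand-in acoustic features
logits = kg_layer(speech_encoder(features))   # score all KG entities
loss = nn.functional.cross_entropy(logits, torch.tensor([1, 5, 42, 7]))
loss.backward()                               # gradients flow into the KG embeddings
```

Because answer selection is a differentiable scoring over KG entities rather than a symbolic lookup, the same layer can in principle be trained against questions spoken in any language, which matches the cross-lingual setting described in the abstract.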