SANVis: Visual Analytics for Understanding Self-Attention Networks

Cited 2 times in Web of Science; cited 5 times in Scopus
DC Field: Value (Language)
dc.contributor.author: Park, Cheonbok (ko)
dc.contributor.author: Na, Inyoup (ko)
dc.contributor.author: Jo, Yongjang (ko)
dc.contributor.author: Shin, Sungbok (ko)
dc.contributor.author: Yoo, Jaehyo (ko)
dc.contributor.author: Kwon, Bum Chul (ko)
dc.contributor.author: Zhao, Jian (ko)
dc.contributor.author: Noh, Hyungjong (ko)
dc.contributor.author: Lee, Yeonsoo (ko)
dc.contributor.author: Choo, Jaegul (ko)
dc.date.accessioned: 2021-01-12T02:50:15Z
dc.date.available: 2021-01-12T02:50:15Z
dc.date.created: 2020-12-03
dc.date.issued: 2019-10
dc.identifier.citation: 2019 IEEE Visualization Conference, VIS 2019, pp. 146-150
dc.identifier.uri: http://hdl.handle.net/10203/279874
dc.description.abstract: Attention networks, a deep neural network architecture inspired by the human attention mechanism, have seen significant success in image captioning, machine translation, and many other applications. Recently, they have evolved further into an advanced approach called multi-head self-attention networks, which encode a set of input vectors, e.g., the word vectors in a sentence, into another set of vectors. Such encoding aims to simultaneously capture diverse syntactic and semantic features within the set, each of which corresponds to a particular attention head, together forming multi-head attention. However, the increased model complexity prevents users from easily understanding and manipulating the inner workings of these models. To tackle these challenges, we present a visual analytics system called SANVis, which helps users understand the behaviors and characteristics of multi-head self-attention networks. Using a state-of-the-art self-attention model called the Transformer, we demonstrate usage scenarios of SANVis in machine translation tasks. Our system is available at http://short.sanvis.org.
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.title: SANVis: Visual Analytics for Understanding Self-Attention Networks
dc.type: Conference
dc.identifier.wosid: 000527436500030
dc.identifier.scopusid: 2-s2.0-85077997739
dc.type.rims: CONF
dc.citation.beginningpage: 146
dc.citation.endingpage: 150
dc.citation.publicationname: 2019 IEEE Visualization Conference, VIS 2019
dc.identifier.conferencecountry: CA
dc.identifier.conferencelocation: Vancouver
dc.identifier.doi: 10.1109/VISUAL.2019.8933677
dc.contributor.localauthor: Choo, Jaegul
dc.contributor.nonIdAuthor: Park, Cheonbok
dc.contributor.nonIdAuthor: Na, Inyoup
dc.contributor.nonIdAuthor: Jo, Yongjang
dc.contributor.nonIdAuthor: Shin, Sungbok
dc.contributor.nonIdAuthor: Yoo, Jaehyo
dc.contributor.nonIdAuthor: Kwon, Bum Chul
dc.contributor.nonIdAuthor: Zhao, Jian
dc.contributor.nonIdAuthor: Noh, Hyungjong
dc.contributor.nonIdAuthor: Lee, Yeonsoo
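
Note: the abstract above describes how multi-head self-attention encodes a set of input vectors (e.g., word vectors in a sentence) into another set, with each head capturing different features. The following is a minimal, self-contained NumPy sketch of that mechanism for orientation only; it is not code from the paper or from the SANVis system, and all function and variable names here are illustrative assumptions.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, W_q, W_k, W_v, W_o, num_heads):
    # Encode a set of input vectors X (seq_len x d_model) into another set,
    # where each head attends over the sequence independently.
    seq_len, d_model = X.shape
    d_head = d_model // num_heads

    # Project inputs to queries, keys, and values, then split into heads.
    Q = (X @ W_q).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    K = (X @ W_k).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    V = (X @ W_v).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    # Scaled dot-product attention per head; each row of `weights` is an
    # attention distribution of the kind a tool like SANVis visualizes.
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    weights = softmax(scores, axis=-1)
    heads = weights @ V                                    # (heads, seq, d_head)

    # Concatenate the heads and apply the output projection.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ W_o, weights

# Toy usage: 5 "word vectors" of dimension 8 with 2 attention heads.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
W_q, W_k, W_v, W_o = (rng.normal(size=(8, 8)) for _ in range(4))
output, attention = multi_head_self_attention(X, W_q, W_k, W_v, W_o, num_heads=2)
print(output.shape, attention.shape)  # (5, 8) (2, 5, 5)
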
Appears in Collection: RIMS Conference Papers
Files in This Item: There are no files associated with this item.
