Encoding features robust to unseen modes of variation with attentive long short-term memory

Cited 2 times in Web of Science · Cited 2 times in Scopus
  • Hits: 693
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Baddar, Wissam J. | ko
dc.contributor.author | Ro, Yong Man | ko
dc.date.accessioned | 2020-04-22T07:20:04Z | -
dc.date.available | 2020-04-22T07:20:04Z | -
dc.date.created | 2019-12-13 | -
dc.date.issued | 2020-04 | -
dc.identifier.citation | PATTERN RECOGNITION, v.100 | -
dc.identifier.issn | 0031-3203 | -
dc.identifier.uri | http://hdl.handle.net/10203/273988 | -
dc.description.abstract | Long short-term memory (LSTM) is a type of recurrent neural network that is efficient for encoding spatio-temporal features in dynamic sequences. Recent work has shown that the LSTM retains information related to the mode of variation in the input dynamic sequence, which reduces the discriminability of the encoded features. To encode features robust to unseen modes of variation, we devise an LSTM adaptation named attentive mode variational LSTM. The proposed attentive mode variational LSTM utilizes the concept of attention to separate the input dynamic sequence into two parts: (1) task-relevant dynamic sequence features and (2) task-irrelevant static sequence features. The task-relevant dynamic features are used to encode and emphasize the dynamics in the input sequence. The task-irrelevant static sequence features are utilized to encode the mode of variation in the input dynamic sequence. Finally, the attentive mode variational LSTM suppresses the effect of mode variation with a shared output gate, resulting in a spatio-temporal feature robust to unseen variations. The effectiveness of the proposed attentive mode variational LSTM has been verified on two tasks: facial expression recognition and human action recognition. Comprehensive and extensive experiments have verified that the proposed method encodes spatio-temporal features robust to variations unseen during training. | -
dc.language | English | -
dc.publisher | ELSEVIER SCI LTD | -
dc.title | Encoding features robust to unseen modes of variation with attentive long short-term memory | -
dc.type | Article | -
dc.identifier.wosid | 000533530800048 | -
dc.identifier.scopusid | 2-s2.0-85077332798 | -
dc.type.rims | ART | -
dc.citation.volume | 100 | -
dc.citation.publicationname | PATTERN RECOGNITION | -
dc.identifier.doi | 10.1016/j.patcog.2019.107159 | -
dc.contributor.localauthor | Ro, Yong Man | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Long short-term memory | -
dc.subject.keywordAuthor | Recurrent neural networks | -
dc.subject.keywordAuthor | Attention | -
dc.subject.keywordAuthor | Robust features | -
dc.subject.keywordAuthor | Modes of variation | -
dc.subject.keywordAuthor | Facial expression recognition | -
dc.subject.keywordAuthor | Human action recognition | -
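The abstract describes an LSTM cell in which an attention mask splits each input frame into a task-relevant dynamic part and a task-irrelevant static part, with a shared output gate suppressing the mode of variation. The following is a minimal NumPy sketch of that idea only, not the authors' implementation: all weight names, the per-dimension sigmoid attention mask, the twin cell states, and the way the gated static feature is subtracted from the output are illustrative assumptions, not details from the paper.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class AttentiveModeVariationalLSTMCell:
    """Illustrative sketch of an attentive mode variational LSTM cell.

    NOT the paper's implementation: weight names and the exact way the
    static (mode-of-variation) path is suppressed are assumptions made
    for demonstration.
    """

    def __init__(self, input_dim, hidden_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        d, h, s = input_dim, hidden_dim, 0.1
        # attention over the input frame (per-dimension soft mask)
        self.W_a = rng.normal(0, s, (d, d + h))
        # standard LSTM gates for the task-relevant dynamic path
        self.W_i = rng.normal(0, s, (h, d + h))
        self.W_f = rng.normal(0, s, (h, d + h))
        self.W_g = rng.normal(0, s, (h, d + h))
        # cell transform for the task-irrelevant static path
        self.W_s = rng.normal(0, s, (h, d + h))
        # single output gate shared by both paths (as in the abstract)
        self.W_o = rng.normal(0, s, (h, d + h))

    def step(self, x, h_prev, c_dyn, c_sta):
        z = np.concatenate([x, h_prev])
        a = sigmoid(self.W_a @ z)      # soft attention mask in [0, 1]
        x_dyn = a * x                  # task-relevant dynamic part
        x_sta = (1.0 - a) * x          # task-irrelevant static part

        # dynamic path: ordinary LSTM update driven by x_dyn
        z_dyn = np.concatenate([x_dyn, h_prev])
        i = sigmoid(self.W_i @ z_dyn)
        f = sigmoid(self.W_f @ z_dyn)
        c_dyn = f * c_dyn + i * np.tanh(self.W_g @ z_dyn)

        # static path: separate cell state encoding the mode of variation
        z_sta = np.concatenate([x_sta, h_prev])
        c_sta = f * c_sta + i * np.tanh(self.W_s @ z_sta)

        # shared output gate; subtracting the gated static feature is one
        # plausible (assumed) way to suppress the mode of variation
        o = sigmoid(self.W_o @ z)
        h = o * np.tanh(c_dyn) - o * np.tanh(c_sta)
        return h, c_dyn, c_sta
```

Running the cell over a sequence and keeping the final `h` yields the sequence-level spatio-temporal feature; by construction the emitted feature is dominated by the attended (dynamic) content rather than the static appearance of the sequence.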
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS