Implementation of Multimodal Biometric Recognition via Multi-feature Deep Learning Networks and Feature Fusion

Cited 18 times in Web of Science · Cited 15 times in Scopus
  • Hits: 618
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Tiong, Leslie Ching Ow | ko
dc.contributor.author | Kim, Seong Tae | ko
dc.contributor.author | Ro, Yong Man | ko
dc.date.accessioned | 2019-08-26T08:20:14Z | -
dc.date.available | 2019-08-26T08:20:14Z | -
dc.date.created | 2019-04-07 | -
dc.date.issued | 2019-08 | -
dc.identifier.citation | MULTIMEDIA TOOLS AND APPLICATIONS, v.78, no.16, pp.22743 - 22772 | -
dc.identifier.issn | 1380-7501 | -
dc.identifier.uri | http://hdl.handle.net/10203/265536 | -
dc.description.abstract | Although there is an abundance of current research on facial recognition, it still faces significant challenges related to variations in factors such as aging, pose, occlusion, resolution, and appearance. In this paper, we propose a Multi-feature Deep Learning Network (MDLN) architecture that uses modalities from the facial and periocular regions, with the addition of texture descriptors, to improve recognition performance. Specifically, MDLN is designed as a feature-level fusion approach that correlates the multimodal biometric data with the texture descriptor, creating a new feature representation. The proposed MDLN model therefore provides more information through this feature representation to achieve better performance, while overcoming the limitations that persist in existing unimodal deep learning approaches. The proposed model has been evaluated on several public datasets, and our experiments show that MDLN improves biometric recognition performance under challenging conditions, including variations in illumination, appearance, and pose misalignment. | -
dc.language | English | -
dc.publisher | SPRINGER | -
dc.title | Implementation of Multimodal Biometric Recognition via Multi-feature Deep Learning Networks and Feature Fusion | -
dc.type | Article | -
dc.identifier.wosid | 000479055400026 | -
dc.identifier.scopusid | 2-s2.0-85064832552 | -
dc.type.rims | ART | -
dc.citation.volume | 78 | -
dc.citation.issue | 16 | -
dc.citation.beginningpage | 22743 | -
dc.citation.endingpage | 22772 | -
dc.citation.publicationname | MULTIMEDIA TOOLS AND APPLICATIONS | -
dc.identifier.doi | 10.1007/s11042-019-7618-0 | -
dc.contributor.localauthor | Ro, Yong Man | -
dc.contributor.nonIdAuthor | Tiong, Leslie Ching Ow | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Deep multimodal learning | -
dc.subject.keywordAuthor | Multimodal biometric recognition | -
dc.subject.keywordAuthor | Multi-feature fusion layers | -
dc.subject.keywordAuthor | Texture descriptor representations | -
dc.subject.keywordPlus | FACE RECOGNITION | -
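
The abstract above describes MDLN as a feature-level fusion architecture that combines facial and periocular modalities with texture descriptors before classification. The sketch below illustrates that general idea only: per-modality CNN embeddings are concatenated with a precomputed texture-descriptor vector and passed through shared fully connected fusion layers. Class names, layer sizes, branch depths, and the 59-dimensional texture vector are illustrative assumptions and do not reproduce the exact MDLN configuration reported in the paper.

# Hypothetical sketch of feature-level fusion for multimodal biometrics
# (face crop + periocular crop + a precomputed texture-descriptor vector).
# All dimensions and names are assumptions, not the authors' MDLN settings.
import torch
import torch.nn as nn


class ConvBranch(nn.Module):
    """Small CNN that maps one image modality to an embedding vector."""

    def __init__(self, in_channels: int = 3, embed_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # -> (N, 64, 1, 1), size-agnostic
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x).flatten(1))


class FeatureLevelFusionNet(nn.Module):
    """Concatenates face, periocular, and texture-descriptor features,
    then learns a joint representation with fully connected fusion layers."""

    def __init__(self, texture_dim: int = 59, embed_dim: int = 256,
                 num_identities: int = 100):
        super().__init__()
        self.face_branch = ConvBranch(embed_dim=embed_dim)
        self.periocular_branch = ConvBranch(embed_dim=embed_dim)
        self.fusion = nn.Sequential(
            nn.Linear(2 * embed_dim + texture_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 256),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(256, num_identities)

    def forward(self, face, periocular, texture):
        # Feature-level fusion: concatenate all modality features, then
        # let the fusion layers learn correlations across modalities.
        fused = torch.cat(
            [self.face_branch(face),
             self.periocular_branch(periocular),
             texture],  # e.g. an LBP-style histogram computed offline
            dim=1,
        )
        return self.classifier(self.fusion(fused))


if __name__ == "__main__":
    model = FeatureLevelFusionNet()
    face = torch.randn(4, 3, 128, 128)        # face crops
    periocular = torch.randn(4, 3, 64, 128)   # periocular crops
    texture = torch.randn(4, 59)              # texture-descriptor vectors
    print(model(face, periocular, texture).shape)  # torch.Size([4, 100])

In this sketch the fusion happens at the feature level (before any decision is made), which is what distinguishes the approach described in the abstract from score- or decision-level fusion of separately trained unimodal recognizers.
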
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.