Implementation of Multimodal Biometric Recognition via Multi-feature Deep Learning Networks and Feature Fusion

Cited 18 times in Web of Science; cited 15 times in Scopus
Although there is an abundance of current research on facial recognition, it still faces significant challenges related to variations in aging, pose, occlusion, resolution, and appearance. In this paper, we propose a Multi-feature Deep Learning Network (MDLN) architecture that uses modalities from the facial and periocular regions, augmented with texture descriptors, to improve recognition performance. Specifically, MDLN is designed as a feature-level fusion approach that correlates the multimodal biometric data with the texture descriptor to create a new feature representation. The proposed MDLN model therefore conveys more information through this feature representation, achieving better performance while overcoming the limitations of existing unimodal deep learning approaches. We evaluated the proposed model on several public datasets, and our experiments show that MDLN improves biometric recognition performance under challenging conditions, including variations in illumination and appearance as well as pose misalignment.
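The record does not include the paper's architectural details, but the core idea of feature-level fusion can be sketched as follows. This is a minimal illustration only, assuming hypothetical feature extractors and dimensions (256-d face embedding, 128-d periocular embedding, a 59-bin LBP-style texture histogram); the actual MDLN branch designs are described in the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors. The real MDLN uses deep
# network branches for the face and periocular regions plus a texture
# descriptor; the dimensions below are illustrative assumptions.
face_feat = rng.standard_normal(256)        # face-branch embedding
periocular_feat = rng.standard_normal(128)  # periocular-branch embedding
texture_feat = rng.standard_normal(59)      # e.g. an LBP-style histogram

def fuse_features(*feats):
    """Feature-level fusion: L2-normalize each modality so no single
    branch dominates, then concatenate into one joint representation
    that would feed the shared classification layers."""
    normed = [f / (np.linalg.norm(f) + 1e-12) for f in feats]
    return np.concatenate(normed)

fused = fuse_features(face_feat, periocular_feat, texture_feat)
print(fused.shape)  # (443,) = 256 + 128 + 59
```

In contrast to score-level or decision-level fusion, fusing at the feature level lets the downstream classifier learn correlations across modalities, which is the property the abstract attributes to MDLN.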
Publisher
SPRINGER
Issue Date
2019-08
Language
English
Article Type
Article
Citation

MULTIMEDIA TOOLS AND APPLICATIONS, v.78, no.16, pp.22743 - 22772

ISSN
1380-7501
DOI
10.1007/s11042-019-7618-0
URI
http://hdl.handle.net/10203/265536
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.