Multi-Task Learning Using Task Dependencies for Face Attributes Prediction

Cited 1 time in webofscience Cited 1 time in scopus
Face attributes prediction has a growing number of applications in human-computer interaction, face verification, and video surveillance. Various studies show that dependencies exist among face attributes. A multi-task learning architecture can build synergy among correlated tasks through parameter sharing in the shared layers. However, most multi-task learning architectures ignore the dependencies between tasks in the task-specific layers. How to further boost the performance of individual tasks by exploiting task dependencies among face attributes therefore remains challenging. In this paper, we propose a multi-task learning architecture that uses task dependencies for face attributes prediction, and we evaluate its performance on the tasks of smile and gender prediction. The attention modules designed into the task-specific layers of the proposed architecture learn task-dependent disentangled representations. The experimental results demonstrate the effectiveness of the proposed network in comparison with a traditional multi-task learning architecture and state-of-the-art methods on the Faces of the World (FotW) and Labeled Faces in the Wild-a (LFWa) datasets.
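The abstract describes shared layers with parameter sharing plus per-task attention modules in the task-specific layers. The following is a minimal NumPy sketch of that general idea, not the authors' implementation: the layer dimensions, the sigmoid gating form of the attention, and all parameter names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Shared layer: one projection whose parameters are shared by all tasks.
W_shared = rng.standard_normal((128, 64)) * 0.1

# Task-specific layers: each task has its own attention module (a learned
# gate over the shared features) and its own prediction head.
TASKS = ("smile", "gender")
W_att = {t: rng.standard_normal((64, 64)) * 0.1 for t in TASKS}
W_head = {t: rng.standard_normal((64, 1)) * 0.1 for t in TASKS}

def forward(x):
    """Run one forward pass for a batch of face feature vectors."""
    h = np.tanh(x @ W_shared)                # shared representation
    out = {}
    for task in TASKS:
        gate = sigmoid(h @ W_att[task])      # attention weights in (0, 1)
        z = gate * h                         # task-dependent gated features
        out[task] = sigmoid(z @ W_head[task]).ravel()  # per-task probability
    return out

x = rng.standard_normal((4, 128))            # batch of 4 input feature vectors
preds = forward(x)
print(preds["smile"].shape, preds["gender"].shape)  # (4,) (4,)
```

The gating step is where the two task-specific branches diverge: both read the same shared representation, but each applies its own attention weights before its head, which is one simple way to obtain task-dependent representations from shared features.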
Publisher
MDPI
Issue Date
2019-06
Language
English
Article Type
Article
Citation

APPLIED SCIENCES-BASEL, v.9, no.12

ISSN
2076-3417
DOI
10.3390/app9122535
URI
http://hdl.handle.net/10203/263748
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
000473754800144.pdf (4.57 MB)
