Supervised vs. Self-supervised Pre-trained models for Hand Pose Estimation

Fully supervised learning and self-supervised learning are two standard frameworks for training visual representations. While it remains unclear which of the two frameworks yields better pre-trained representations, this paper compares their transfer performance on the hand pose estimation task. We conduct experiments on one supervised pre-trained model and five self-supervised pre-trained models. From the results, we conclude that self-supervised pre-trained models do not necessarily outperform their supervised counterparts, although they do lead to faster convergence of the neural network.
Publisher
IEEE Computer Society
Issue Date
2022-10
Language
English
Citation

13th International Conference on Information and Communication Technology Convergence, ICTC 2022, pp.467 - 470

DOI
10.1109/ICTC55196.2022.9953011
URI
http://hdl.handle.net/10203/312666
Appears in Collection
EE-Conference Papers
Files in This Item
There are no files associated with this item.
