DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cho, Gyusang | ko |
dc.contributor.author | Youn, Chan-Hyun | ko |
dc.date.accessioned | 2023-09-15T06:00:27Z | - |
dc.date.available | 2023-09-15T06:00:27Z | - |
dc.date.created | 2023-09-15 | - |
dc.date.issued | 2022-10 | - |
dc.identifier.citation | 13th International Conference on Information and Communication Technology Convergence, ICTC 2022, pp.467 - 470 | - |
dc.identifier.uri | http://hdl.handle.net/10203/312666 | - |
dc.description.abstract | Fully-supervised learning and self-supervised learning are two standard frameworks for learning visual representations. While the relative strengths of the two frameworks during pre-training have been widely discussed, this paper compares their transfer performance on the hand pose estimation task. We conduct experiments on one supervised pre-trained model and five self-supervised pre-trained models. From these experiments, we conclude that self-supervised pre-trained models do not necessarily outperform their supervised pre-trained counterparts, although self-supervised pre-training does lead to faster convergence of the neural network. | - |
dc.language | English | - |
dc.publisher | IEEE Computer Society | - |
dc.title | Supervised vs. Self-supervised Pre-trained models for Hand Pose Estimation | - |
dc.type | Conference | - |
dc.identifier.scopusid | 2-s2.0-85143256808 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 467 | - |
dc.citation.endingpage | 470 | - |
dc.citation.publicationname | 13th International Conference on Information and Communication Technology Convergence, ICTC 2022 | - |
dc.identifier.conferencecountry | KO | - |
dc.identifier.conferencelocation | Jeju Island | - |
dc.identifier.doi | 10.1109/ICTC55196.2022.9953011 | - |
dc.contributor.localauthor | Youn, Chan-Hyun | - |