Fully-supervised learning and self-supervised learning are two standard frameworks for training visual representations. While the relative strengths and weaknesses of the two frameworks are well understood at the pre-training stage, this paper compares their transfer performance on the hand posture estimation task. We conduct experiments on one supervised pre-trained model and five self-supervised pre-trained models. From the results, we conclude that self-supervised pre-trained models do not necessarily outperform their supervised counterparts, although self-supervised pre-training does lead to faster convergence of the neural network.