DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Junmo | - |
dc.contributor.advisor | 김준모 | - |
dc.contributor.author | Yang, Juyoung | - |
dc.date.accessioned | 2023-06-23T19:33:40Z | - |
dc.date.available | 2023-06-23T19:33:40Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1030552&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/309094 | - |
dc.description | Thesis (Ph.D.) - 한국과학기술원 (KAIST) : School of Electrical Engineering, 2023.2, [iii, 44 p.] | - |
dc.description.abstract | Following considerable development in 3D scanning technology, many deep learning studies with various approaches to 3D vision tasks have recently been proposed. To put these methods to practical use, lightweight models and efficient systems are becoming increasingly important. We therefore focus on data compression and model compression in 3D vision. In this thesis, we first propose a novel framework and an effective auto-encoder architecture for data compression. Unlike existing studies that use fixed or random 2D points, our framework facilitates point cloud reconstruction by generating input-dependent point-wise features for the latent point set. Our method achieves state-of-the-art performance in point cloud reconstruction and unsupervised classification, and performance comparable to counterpart methods in supervised completion. For model compression, we revisit the basic concept of knowledge distillation and compare three losses derived from measures of the similarity between the teacher prediction and the student prediction: the Kullback-Leibler divergence (KLD), mean squared error (MSE), and cosine similarity (CS) losses. Unlike previous studies concerned with the KLD and MSE losses, which transfer the teacher logit values, we explore the possibility of the CS loss transferring the direction of the teacher logits. The CS loss achieves performance comparable to the state of the art with superior efficiency in training time and number of parameters. We finally apply knowledge distillation with the CS loss to 3D models to perform model compression. (An illustrative sketch of the three distillation losses follows this record.) | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Deep neural network; Point cloud reconstruction; Knowledge distillation | - |
dc.subject | 깊은 신경망; 포인트 클라우드 복원; 지식 증류 기법 | - |
dc.title | 3D data and model compression through point cloud reconstruction and knowledge distillation | - |
dc.title.alternative | 포인트 클라우드 복원과 지식 증류를 통한 3차원 데이터 및 모델 압축 | - |
dc.type | Thesis (Ph.D.) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 (KAIST) : School of Electrical Engineering | - |
dc.contributor.alternativeauthor | 양주영 | - |
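
The abstract compares three knowledge-distillation losses computed between teacher and student predictions. Below is a minimal, hedged sketch of how such losses could be written in PyTorch. It is an illustration under assumed conventions (temperature-scaled softmax for the KLD term, raw logits for MSE, per-sample cosine similarity for CS), not the implementation used in the thesis.

```python
# Minimal sketch (assumption, not the thesis code): three knowledge-distillation
# losses computed from teacher and student logits, as compared in the abstract.
import torch
import torch.nn.functional as F

def kd_losses(student_logits, teacher_logits, temperature=1.0):
    """Return (KLD, MSE, CS) distillation losses for a batch of logits."""
    t = temperature
    # KLD: match the softened teacher probability distribution.
    kld = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    # MSE: match the raw teacher logit values directly.
    mse = F.mse_loss(student_logits, teacher_logits)
    # CS: match only the direction of the teacher logit vector (scale-invariant).
    cs = (1.0 - F.cosine_similarity(student_logits, teacher_logits, dim=-1)).mean()
    return kld, mse, cs

# Usage example with random logits for a batch of 8 samples and 10 classes.
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
print([loss.item() for loss in kd_losses(student, teacher, temperature=4.0)])
```

As the abstract notes, the CS term depends only on the direction of the logit vectors and is therefore invariant to their scale, whereas the MSE term also constrains their magnitude and the KLD term compares the induced probability distributions.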