DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 노준용 | - |
dc.contributor.author | Seo, Chang Wook | - |
dc.contributor.author | 서창욱 | - |
dc.date.accessioned | 2024-08-08T19:31:01Z | - |
dc.date.available | 2024-08-08T19:31:01Z | - |
dc.date.issued | 2024 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1098150&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/321995 | - |
dc.description | Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology : Graduate School of Culture Technology, 2024.2, [v, 51 p.] | - |
dc.description.abstract | Sketches reflect the drawing style of individual artists; therefore, it is important to consider their unique styles when extracting sketches from color images for various applications. Unfortunately, most existing sketch extraction methods are designed to extract sketches of a single style. Although there have been some attempts to generate sketches of various styles, these methods generally suffer from two limitations: low-quality results and difficulty in training the model due to the requirement of a paired dataset. In this paper, we propose a novel multi-modal sketch extraction method that can imitate the style of a given reference sketch and can be trained on unpaired data in a semi-supervised manner. Our method outperforms state-of-the-art sketch extraction methods and unpaired image translation methods in both quantitative and qualitative evaluations. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | 인공지능; 딥러닝; 생성모델; 대비학습; 이미지처리 | - |
dc.subject | Artificial intelligence; Deep learning; Generative model; Contrastive learning; Image processing | - |
dc.title | Deep-learning based sketch extraction techniques using attention mechanism and contrastive learning framework | - |
dc.title.alternative | 어텐션 및 대비학습 프레임워크를 활용한 딥러닝 기반의 스케치 생성 기술 | - |
dc.type | Thesis (Ph.D.) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 문화기술대학원 | - |
dc.contributor.alternativeadvisor | Noh, Junyong | - |