Efficient and generalizable neural architecture search for the real world

DC Field: Value (Language)
dc.contributor.advisor: 황성주
dc.contributor.author: Lee, Hayeon
dc.contributor.author: 이하연
dc.date.accessioned: 2024-07-26T19:31:01Z
dc.date.available: 2024-07-26T19:31:01Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1047413&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/320986
dc.description: Thesis (Ph.D.) - KAIST: School of Computing, 2023.8, [viii, 104 p.]
dc.description.abstract: Neural Architecture Search (NAS) has become a powerful technique for automating the design of neural architectures. However, existing NAS approaches spend a significant amount of time exploring and training a large number of architectures that are unsuitable for the given task. Furthermore, they do not generalize across tasks and often fail to leverage useful knowledge learned from previous NAS tasks. To address these limitations, this study proposes an efficient and generalizable neural architecture search framework for real-world applications. The focus of this research is on designing performance predictors that rapidly and accurately predict target performance by transferring knowledge learned from previous NAS tasks, while accounting for the dataset, hardware, and knowledge-distillation settings. Moreover, this study introduces neural architecture generation models that produce task-specific optimized architectures for various tasks through dataset embeddings or guidance from the performance predictor. These generation models efficiently generate task-specific optimized neural architectures by leveraging prior knowledge learned from previous NAS tasks or the distribution of neural architectures. Extensive experiments across computer vision and natural language domains and diverse hardware devices validate the framework. The proposed approach significantly improves architecture search performance over previous NAS methods while greatly reducing computational cost.
dc.language: eng
dc.publisher: 한국과학기술원
dc.subject: 뉴럴 아키텍쳐 탐색; 자동화된 기계학습; 메타 학습
dc.subject: Neural architecture search; NAS; AutoML; Meta-learning
dc.title: Efficient and generalizable neural architecture search for the real world
dc.title.alternative: 실세계를 위한 효율적이고 일반화 가능한 신경망 아키텍쳐 탐색
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원: 전산학부 (School of Computing)
dc.contributor.alternativeauthor: Hwang, Sung Ju
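The abstract above describes a performance predictor that scores candidate architectures for a new task, conditioned on task information (dataset, hardware), so that no candidate has to be trained during search. As a rough illustration of that idea only, not the thesis's actual method, here is a minimal NumPy sketch: an MLP (untrained here; in practice it would be meta-trained on previous NAS tasks) maps an architecture encoding concatenated with a task embedding to a predicted accuracy, and candidates are ranked by that prediction. All names, dimensions, and encodings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights):
    """Forward pass through a small MLP: tanh hidden layers, sigmoid output."""
    h = x
    for W, b in weights[:-1]:
        h = np.tanh(h @ W + b)
    W, b = weights[-1]
    return 1.0 / (1.0 + np.exp(-(h @ W + b)))  # predicted accuracy in (0, 1)

def init_predictor(arch_dim, task_dim, hidden=32):
    """Predictor f(arch, task) -> accuracy. The task embedding stands in for
    the dataset/hardware conditioning described in the abstract."""
    dims = [arch_dim + task_dim, hidden, hidden, 1]
    return [(rng.normal(0.0, 0.5, (i, o)), np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def predict(predictor, arch_enc, task_emb):
    """Score one candidate architecture for one task."""
    x = np.concatenate([arch_enc, task_emb])
    return float(mlp_forward(x[None, :], predictor)[0, 0])

# Rank candidate architectures for a new task without training any of them.
predictor = init_predictor(arch_dim=8, task_dim=4)
task = rng.normal(size=4)                       # hypothetical task embedding
candidates = [rng.normal(size=8) for _ in range(16)]  # hypothetical encodings
scores = [predict(predictor, a, task) for a in candidates]
best = int(np.argmax(scores))
```

In a meta-learned setting, the predictor's weights would be fit on (architecture, task, measured performance) triples collected from earlier NAS runs, which is what lets it transfer across datasets and hardware.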
Appears in Collection: CS-Theses_Ph.D. (박사논문)