Unsupervised test-time adaptation to overcome domain shift in the wild

DC Field / Value
dc.contributor.advisor: Kweon, In So
dc.contributor.advisor: 권인소
dc.contributor.author: Song, Junha
dc.date.accessioned: 2023-06-22T19:32:01Z
dc.date.available: 2023-06-22T19:32:01Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032368&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/308328
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): Interdisciplinary Program in Future Vehicle, 2023.2, [vi, 39 p.]
dc.description.abstract: Despite recent advances in deep learning, deep neural networks often suffer performance degradation when the train and test domains differ significantly. Among the tasks addressing such domain shifts, test-time adaptation (TTA) has attracted significant attention for its practical properties: it adapts a pre-trained model to a new domain during the inference stage using only an unlabeled target dataset. Prior works on TTA assume that the target dataset comes from a single distribution and that model adaptation is performed for a short period. However, TTA may be carried out under two conditions that arise in reality: (1) the target domain contains multiple subdomains (i.e., a compound domain (CD)) that are sufficiently distinct from each other, or (2) long-term adaptation to continually changing domains is performed on memory-constrained edge devices. In this thesis, we present methods and analysis to solve the problems that occur under each of these two conditions. To tackle the first, we assume that the compound domain may occur cyclically, as in driving environments that repeat over time (e.g., daytime, twilight, and night). One naive approach is domain-specific adaptation, in which several models are constructed and only test images from similar domains are used to adapt each model. However, such domain-specific TTA is not feasible when the model cannot access domain labels. Hence, we propose an online clustering algorithm that allows the model to obtain pseudo-domain labels and accumulate knowledge of cyclic domains. Moreover, we boost its performance with adaptation-loss denoising that weighs each test image by its similarity to the source distribution. In doing so, we achieve reliable adaptation performance under compound and cyclical domains.
For the second condition, we present an effective and efficient approach that improves adaptation performance on continually changing target domains. We introduce a novel back-to-the-source regularization in which our newly proposed meta networks are regularized by protected source knowledge distilled from the frozen original networks. With negligible computational overhead, this regularization prevents error accumulation and catastrophic forgetting, yielding stable performance even in long-term test-time adaptation. Moreover, to enable TTA on edge devices, we further present adaptive meta networks that spare the original networks from storing the large activations required for backpropagation, making our approach memory-efficient. We demonstrate that our strategy outperforms other state-of-the-art methods on various benchmarks for image classification and semantic segmentation tasks.
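The online clustering idea in the abstract — assigning each incoming test batch a pseudo-domain label without ground-truth domain labels, so that knowledge of cyclically recurring domains can accumulate — can be illustrated with a minimal prototype-matching sketch. This is not the thesis's algorithm; the function name, the Euclidean-distance matching, the EMA update, and the `threshold` for spawning a new pseudo-domain are all illustrative assumptions.

```python
import numpy as np

def assign_pseudo_domain(batch_feat, prototypes, momentum=0.9, threshold=2.0):
    """Assign a test-batch feature vector to the nearest domain prototype.

    Hypothetical sketch: each prototype is the running mean feature of one
    pseudo-domain. A batch far from every prototype (distance > threshold)
    starts a new pseudo-domain; otherwise the matched prototype is refined
    with an exponential moving average so a recurring (cyclic) domain keeps
    reusing and updating the same cluster.
    """
    if not prototypes:
        prototypes.append(batch_feat.copy())  # first batch seeds domain 0
        return 0
    dists = [np.linalg.norm(batch_feat - p) for p in prototypes]
    k = int(np.argmin(dists))
    if dists[k] > threshold:
        prototypes.append(batch_feat.copy())  # unseen domain: new cluster
        return len(prototypes) - 1
    prototypes[k] = momentum * prototypes[k] + (1 - momentum) * batch_feat
    return k
```

When a domain recurs (e.g., night-time images appearing again after a daytime stretch), its batches fall back into the existing night cluster, so adaptation state accumulated for that pseudo-domain can be reused rather than relearned.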
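The back-to-the-source regularization described for the second condition — keeping the adapted meta networks anchored to knowledge distilled from the frozen source networks so that long-term adaptation neither accumulates errors nor forgets the source — can be sketched as a regularized test-time objective. The exact loss in the thesis is not specified here; the entropy-minimization term and the KL-divergence form of the source anchor below are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def regularized_tta_loss(meta_logits, source_logits, lam=1.0, eps=1e-8):
    """Hypothetical TTA objective with a back-to-the-source anchor.

    meta_logits:   outputs of the adapted (trainable) meta networks
    source_logits: outputs of the frozen original networks (no grad)
    """
    p = softmax(meta_logits)
    q = softmax(source_logits)
    # Common unsupervised TTA term: minimize prediction entropy.
    entropy = -(p * np.log(p + eps)).sum(axis=-1).mean()
    # Source anchor: KL(source || meta) pulls the adapted model back
    # toward the protected source predictions, limiting drift.
    kl = (q * (np.log(q + eps) - np.log(p + eps))).sum(axis=-1).mean()
    return entropy + lam * kl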
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.subject: Unsupervised domain adaptation; Test-time adaptation; Compound target domain
dc.subject: 도메인 적응 (domain adaptation); 현실 환경 적응 (real-world adaptation); 다중 도메인 (multiple domains)
dc.title: Unsupervised test-time adaptation to overcome domain shift in the wild
dc.title.alternative: 현실 테스트 환경에서의 도메인 변화 극복을 위한 자가 적응 학습 방법론 (Self-adaptive learning methodology for overcoming domain shift in real-world test environments)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 (KAIST): Interdisciplinary Program in Future Vehicle
dc.contributor.alternativeauthor: 송준하 (Song, Junha)
Appears in Collection: PD-Theses_Master (석사논문, Master's theses)
Files in This Item: There are no files associated with this item.
