Learning to scale the labels for self-training based domain adaptation

Deep self-training approaches have provided a powerful solution to domain adaptation. The self-training scheme repeatedly processes target data: it generates target pseudo-labels and fine-tunes the model with them. Since the generated pseudo-labels inevitably contain noise, existing self-training approaches normally rely on a hard-threshold strategy to cut out noisy labels. However, fixing the threshold with specific hyper-parameters inevitably produces sparse pseudo-labels in practice. This is a critical issue, because the insufficient training signal from sparse pseudo-labels leads to a sub-optimal, error-prone model. In this thesis, we propose methods and analysis for softening the hard threshold, which amounts to scaling the labels for effective learning in self-training based domain adaptation. Toward this goal, we consider two settings in domain adaptation: 1) no labels exist in the target domain, and 2) very few labels exist in the target domain. To tackle the first setting, we propose a Two-phase Pseudo-Label Densification framework, referred to as TPLD. It scales the initial pseudo-labels via image-level and batch-level densification in an unsupervised manner. In the first phase, sliding-window voting exploits the spatial correlations inherent in the image to propagate confident predictions. In the second phase, we perform confidence-based easy-hard classification: easy samples are now trained with their full pseudo-labels, while for hard samples we instead adopt adversarial learning to enforce feature alignment. To ease the training process and avoid noisy predictions, we combine a bootstrapping mechanism with the original self-training loss. We show that the proposed TPLD can be easily integrated into existing self-training based approaches and significantly improves their performance.
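The two ingredients above — hard-threshold pseudo-labeling and window-based voting to densify the result — can be sketched as follows. This is a minimal NumPy illustration under assumed details: the function names, the `IGNORE` sentinel, the window size, and the `min_votes` rule are all hypothetical, since the abstract does not specify TPLD's exact voting scheme.

```python
import numpy as np

IGNORE = 255  # sentinel label for pixels left unlabeled

def hard_threshold_pseudo_labels(probs, tau=0.9):
    """Standard hard-threshold pseudo-labeling: keep a pixel's argmax
    class only if its confidence exceeds tau, else mark it IGNORE.
    probs: (H, W, C) softmax probabilities."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < tau] = IGNORE
    return labels

def sliding_window_vote(labels, win=3, min_votes=5):
    """Densify sparse pseudo-labels: an IGNORE pixel adopts the majority
    class of labeled neighbors in a win x win window, provided at least
    min_votes neighbors agree (a sketch of sliding-window voting)."""
    h, w = labels.shape
    out = labels.copy()
    r = win // 2
    for i in range(h):
        for j in range(w):
            if labels[i, j] != IGNORE:
                continue
            patch = labels[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            valid = patch[patch != IGNORE]
            if valid.size == 0:
                continue
            classes, counts = np.unique(valid, return_counts=True)
            if counts.max() >= min_votes:
                out[i, j] = classes[counts.argmax()]
    return out
```

With `tau = 0.9`, a pixel whose top probability is only 0.5 is dropped by the threshold, but if its surrounding window is confidently labeled with one class, the vote fills it back in — illustrating how densification recovers training signal that hard thresholding discards.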
For the second setting, our motivation starts from how to utilize very few labels to scale the initial pseudo-labels and boost performance. We argue that actively and adaptively spending a small budget of pixel labels to effectively scale the pseudo-labels is critical. We therefore propose Active label Densification via Temporal Ensembled Self-Training (Active TEST). The uncertainty measure that is essential for deciding where to label actively is designed in a temporally ensembled manner. To scale the labels adaptively and effectively, we provide the initial class-wise thresholds as class-prior information to the uncertainty measure. We show that Active TEST not only outperforms previous state-of-the-art UDA methods but also achieves performance fairly close to the supervised one.
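A minimal sketch of the active-selection idea follows: accumulate a temporal ensemble of per-pixel predictions, then spend the labeling budget on the pixels whose ensembled confidence falls furthest below their predicted class's threshold. The class names, the EMA momentum, and the margin-based scoring rule are assumptions for illustration; the abstract does not give Active TEST's exact formulation.

```python
import numpy as np

class TemporalEnsembledUncertainty:
    """Sketch of temporally ensembled uncertainty: keep an exponential
    moving average (EMA) of per-pixel class probabilities over training
    rounds, then score uncertainty from the ensembled prediction."""

    def __init__(self, shape_hw, n_classes, momentum=0.9):
        # start from a uniform distribution over classes
        self.ema = np.full((*shape_hw, n_classes), 1.0 / n_classes)
        self.momentum = momentum

    def update(self, probs):
        """Blend the latest (H, W, C) softmax map into the ensemble."""
        self.ema = self.momentum * self.ema + (1 - self.momentum) * probs

    def select_pixels(self, class_thresholds, budget):
        """Score each pixel by the margin between its ensembled
        confidence and its predicted class's threshold (the class-prior
        information), then return the flat indices of the `budget`
        lowest-scoring (most uncertain) pixels to query for labels."""
        conf = self.ema.max(axis=-1)
        pred = self.ema.argmax(axis=-1)
        score = conf - class_thresholds[pred]  # low = far below threshold
        return np.argsort(score, axis=None)[:budget]
```

Feeding the class-wise thresholds into the score lets the selection adapt per class: a pixel of a hard class with a low threshold is queried less eagerly than an equally confident pixel of an easy class with a high threshold.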
Advisors
Kweon, In So (권인소)
Description
Korea Advanced Institute of Science and Technology: Interdisciplinary Program in Future Vehicle
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: Interdisciplinary Program in Future Vehicle, 2021.2, [v, 41 p.]

Keywords

Domain adaptation; Self-training; Unsupervised domain adaptation; Active domain adaptation; Computer vision

URI
http://hdl.handle.net/10203/295128
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948596&flag=dissertation
Appears in Collection
PD-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
