Two-Phase Pseudo Label Densification for Self-training based Domain Adaptation

DC Field | Value | Language
dc.contributor.author | Shin, Inkyu | ko
dc.contributor.author | Woo, Sanghyun | ko
dc.contributor.author | Pan, Fei | ko
dc.contributor.author | Kweon, In-So | ko
dc.date.accessioned | 2020-12-16T06:30:26Z | -
dc.date.available | 2020-12-16T06:30:26Z | -
dc.date.created | 2020-12-01 | -
dc.date.issued | 2020-08 | -
dc.identifier.citation | European Conference on Computer Vision, ECCV 2020 | -
dc.identifier.uri | http://hdl.handle.net/10203/278561 | -
dc.description.abstract | Recently, deep self-training approaches have emerged as a powerful solution to unsupervised domain adaptation. The self-training scheme iteratively processes the target data: it generates target pseudo labels and retrains the network on them. However, since only confident predictions are taken as pseudo labels, existing self-training approaches inevitably produce sparse pseudo labels in practice. We see this as critical because the resulting insufficient training signals lead to a suboptimal, error-prone model. To tackle this problem, we propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD. In the first phase, we use sliding-window voting to propagate confident predictions, exploiting the intrinsic spatial correlations in the images. In the second phase, we perform a confidence-based easy-hard classification. For the easy samples, we employ their full pseudo labels; for the hard ones, we instead adopt adversarial learning to enforce hard-to-easy feature alignment. To ease the training process and avoid noisy predictions, we introduce a bootstrapping mechanism into the original self-training loss. We show that the proposed TPLD can be easily integrated into existing self-training based approaches and improves performance significantly. Combined with the recently proposed CRST self-training framework, we achieve new state-of-the-art results on two standard UDA benchmarks. | -
dc.language | English | -
dc.publisher | European Conference on Computer Vision | -
dc.title | Two-Phase Pseudo Label Densification for Self-training based Domain Adaptation | -
dc.type | Conference | -
dc.type.rims | CONF | -
dc.citation.publicationname | European Conference on Computer Vision, ECCV 2020 | -
dc.identifier.conferencecountry | EI | -
dc.identifier.conferencelocation | Virtual | -
dc.contributor.localauthor | Kweon, In-So | -
dc.contributor.nonIdAuthor | Shin, Inkyu | -
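
The abstract's first phase densifies sparse pseudo labels with sliding-window voting over confident neighbouring predictions. The snippet below is a minimal, illustrative sketch of that idea only, not the authors' implementation: the label map with an ignore value, the window size, the vote threshold, and the names `densify_by_window_voting` and `IGNORE` are all hypothetical choices made for illustration.

```python
import numpy as np

IGNORE = -1  # hypothetical label value for low-confidence pixels left out of the sparse pseudo labels

def densify_by_window_voting(pseudo: np.ndarray,
                             num_classes: int,
                             window: int = 7,
                             min_votes: int = 5) -> np.ndarray:
    """Fill IGNORE pixels whose confident neighbours agree on a single class."""
    H, W = pseudo.shape
    out = pseudo.copy()
    r = window // 2
    for y in range(H):
        for x in range(W):
            if pseudo[y, x] != IGNORE:
                continue  # already confident: keep the original pseudo label
            # gather confident labels inside the local window around (y, x)
            patch = pseudo[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            votes = patch[patch != IGNORE]
            if votes.size == 0:
                continue  # no confident neighbours to vote with
            counts = np.bincount(votes, minlength=num_classes)
            winner = int(counts.argmax())
            if counts[winner] >= min_votes:  # accept only a sufficiently strong consensus
                out[y, x] = winner
    return out

# Toy usage with dummy scores (19 classes, 64x64 image):
rng = np.random.default_rng(0)
scores = rng.random((19, 64, 64))
confidence = scores.max(axis=0)
sparse = np.where(confidence > 0.9, scores.argmax(axis=0), IGNORE)  # sparse pseudo labels
dense = densify_by_window_voting(sparse, num_classes=19)
```

In the full TPLD framework this densification is only phase one; the confidence-based easy/hard split and the adversarial hard-to-easy alignment described in the abstract follow it and are not sketched here.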
Appears in Collection
EE-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
