MM-TTA: Multi-Modal Test-Time Adaptation for 3D Semantic Segmentation

Cited 10 times in Web of Science; cited 0 times in Scopus.
Test-time adaptation approaches have recently emerged as a practical solution for handling domain shift without access to the source domain data. In this paper, we propose and explore a new multi-modal extension of test-time adaptation for 3D semantic segmentation. We find that directly applying existing methods usually results in performance instability at test time, because the multi-modal input is not considered jointly. To design a framework that can take full advantage of multi-modality, where each modality provides regularized self-supervisory signals to the other modalities, we propose two complementary modules within and across the modalities. First, Intra-modal Pseudo-label Generation (Intra-PG) obtains reliable pseudo labels within each modality by aggregating information from two models that are both pre-trained on source data but updated with target data at different paces. Second, Inter-modal Pseudo-label Refinement (Inter-PR) adaptively selects the more reliable pseudo labels across modalities based on a proposed consistency scheme. Experiments demonstrate that our regularized pseudo labels produce stable self-learning signals in numerous multi-modal test-time adaptation scenarios for 3D semantic segmentation. Project website: https://www.nec-labs.com/mas/MM-TTA
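The two modules in the abstract can be illustrated with a minimal sketch: fuse the predictions of a fast-updated and a slow-updated model within each modality (Intra-PG), then, per point, keep the pseudo label from whichever modality's two models agree more (Inter-PR). All function and variable names below are hypothetical, and the averaging/agreement scheme is a simplification of the paper's method, not its actual implementation.

```python
import numpy as np

def intra_pg_pseudo_labels(p_fast, p_slow):
    """Within one modality, aggregate class probabilities from a model
    updated quickly on target data (p_fast) and one updated slowly (p_slow).
    Averaging is an illustrative choice; the paper's scheme may differ.
    Returns (pseudo_labels, consistency), where consistency is 1.0 where
    the fast and slow models predict the same class, else 0.0."""
    p = 0.5 * (p_fast + p_slow)
    labels = p.argmax(-1)
    consistency = (p_fast.argmax(-1) == p_slow.argmax(-1)).astype(float)
    return labels, consistency

def inter_pr_select(labels_2d, cons_2d, labels_3d, cons_3d):
    """Across modalities (e.g., 2D image vs. 3D point cloud branch),
    pick per point the pseudo label from the modality with the higher
    intra-modal consistency score."""
    use_3d = cons_3d >= cons_2d
    return np.where(use_3d, labels_3d, labels_2d)
```

For example, on a point where the 2D branch's fast and slow models disagree but the 3D branch's models agree, `inter_pr_select` keeps the 3D pseudo label for that point.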
Publisher
Computer Vision Foundation, IEEE Computer Society
Issue Date
2022-06-24
Language
English
Citation

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022, pp.16907 - 16916

ISSN
1063-6919
DOI
10.1109/CVPR52688.2022.01642
URI
http://hdl.handle.net/10203/299286
Appears in Collection
EE-Conference Papers (conference papers); ME-Conference Papers (conference papers)
Files in This Item
There are no files associated with this item.