Learning to Associate Every Segment for Video Panoptic Segmentation

Cited 13 times in Web of Science; 0 times in Scopus.
Temporal correspondence - linking pixels or objects across frames - is a fundamental supervisory signal for video models. For panoptic understanding of dynamic scenes, we extend this concept to every segment. Specifically, we aim to learn coarse segment-level matching and fine pixel-level matching jointly. We implement this idea by designing two novel learning objectives. To validate our proposal, we adopt a deep siamese model and train it to learn temporal correspondence at two levels (i.e., segment and pixel) along with the target task. At inference time, the model processes each frame independently, without any extra computation or post-processing. We show that our per-frame inference model achieves new state-of-the-art results on the Cityscapes-VPS and VIPER datasets. Moreover, owing to its high efficiency, the model runs about 3× faster than the previous state-of-the-art approach.
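The segment-level matching described in the abstract can be illustrated with a minimal sketch: pool per-pixel features over each segment mask to get segment embeddings, then score cross-frame segment pairs with a softmax over cosine similarities. This is an assumption about the general form of such an objective (an InfoNCE-style contrastive loss with an assumed temperature of 0.07), not the paper's actual implementation; all function names here are hypothetical.

```python
import numpy as np

def segment_embeddings(features, masks):
    """Average-pool per-pixel features over each segment mask.

    features: (H, W, C) array of pixel embeddings.
    masks:    (N, H, W) boolean array, one mask per segment.
    Returns (N, C) L2-normalized segment embeddings.
    """
    embs = []
    for m in masks:
        v = features[m].mean(axis=0)                  # mean over the segment's pixels
        embs.append(v / (np.linalg.norm(v) + 1e-8))   # normalize for cosine similarity
    return np.stack(embs)

def segment_matching_loss(emb_t, emb_tp1, matches, temperature=0.07):
    """Contrastive segment-level matching loss between two frames.

    emb_t, emb_tp1: (N, C) normalized segment embeddings of frames t and t+1.
    matches: index array such that segment i in frame t corresponds to
             segment matches[i] in frame t+1.
    """
    logits = (emb_t @ emb_tp1.T) / temperature        # (N, N) scaled cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(matches)), matches].mean()
```

A pixel-level objective would take the analogous form over per-pixel embeddings; training both together is what gives the two-level supervision the abstract refers to, while inference needs neither loss and so stays per-frame.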
Publisher
IEEE
Issue Date
2021-06
Language
English
Citation

2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2704-2713

ISSN
1063-6919
DOI
10.1109/cvpr46437.2021.00273
URI
http://hdl.handle.net/10203/312228
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.