Per-Clip Video Object Segmentation

Abstract

Recently, memory-based approaches have shown promising results on semi-supervised video object segmentation. These methods predict object masks frame-by-frame with the help of a frequently updated memory of previous masks. Different from this per-frame inference, we investigate an alternative perspective by treating video object segmentation as clip-wise mask propagation. In this per-clip inference scheme, we update the memory at a fixed interval and simultaneously process the set of consecutive frames (i.e., a clip) between memory updates. The scheme provides two potential benefits: an accuracy gain from clip-level optimization and an efficiency gain from parallel computation of multiple frames. To this end, we propose a new method tailored for per-clip inference. Specifically, we first introduce a clip-wise operation to refine features based on intra-clip correlation. In addition, we employ a progressive matching mechanism for efficient information passing within a clip. With the synergy of the two modules and a newly proposed per-clip-based training, our network achieves state-of-the-art performance on YouTube-VOS 2018/2019 val (84.6% and 84.6%) and DAVIS 2016/2017 val (91.9% and 86.1%). Furthermore, our model offers a favorable speed-accuracy trade-off with varying memory update intervals, which provides considerable flexibility.
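The control flow of per-clip inference described above can be illustrated with a minimal Python sketch. This is not the authors' implementation: segment_clip, encode_memory, and clip_len are hypothetical placeholders standing in for the segmentation network, the memory encoder, and the memory update interval; only the loop structure (one memory update per clip, all frames in a clip handled in one pass) reflects the scheme in the abstract.

    import numpy as np

    def segment_clip(frames, memory):
        # Placeholder for the clip-wise segmentation network. In the paper,
        # frames in a clip attend to the memory and to each other
        # (intra-clip correlation, progressive matching); here we simply
        # return dummy masks with the frames' spatial size.
        return [np.zeros(f.shape[:2], dtype=np.uint8) for f in frames]

    def encode_memory(frame, mask):
        # Placeholder memory entry built from a frame and its predicted mask.
        return (frame, mask)

    def per_clip_inference(frames, first_mask, clip_len=5):
        # Per-clip mask propagation: the memory is updated once per clip,
        # and the frames inside a clip are processed together, which is
        # what allows parallel computation in the real model.
        memory = [encode_memory(frames[0], first_mask)]
        masks = [first_mask]
        for start in range(1, len(frames), clip_len):
            clip = frames[start:start + clip_len]
            clip_masks = segment_clip(clip, memory)  # one pass per clip
            masks.extend(clip_masks)
            # Memory update interval equals the clip length.
            memory.append(encode_memory(clip[-1], clip_masks[-1]))
        return masks

    # Example: 30 dummy frames with an annotated first-frame mask.
    video = [np.zeros((480, 854, 3), dtype=np.uint8) for _ in range(30)]
    init_mask = np.zeros((480, 854), dtype=np.uint8)
    print(len(per_clip_inference(video, init_mask, clip_len=5)))  # 30 masks

Setting clip_len to 1 recovers ordinary per-frame inference with a memory update after every frame; larger values trade memory freshness for fewer updates and more frames processed in parallel, which is the speed-accuracy trade-off noted in the abstract.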
Publisher
Computer Vision Foundation, IEEE Computer Society
Issue Date
2022-06-24
Language
English
Citation

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), pp. 1342-1351

ISSN
1063-6919
DOI
10.1109/CVPR52688.2022.00141
URI
http://hdl.handle.net/10203/301209
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.