Learning Residual Flow as Dynamic Motion from Stereo Videos

Cited 11 times in Web of Science; cited 0 times in Scopus
We present a method for decomposing the 3D scene flow observed from a moving stereo rig into stationary scene elements and dynamic object motion. Our unsupervised learning framework jointly reasons about camera motion, optical flow, and the 3D motion of moving objects. Three cooperating networks predict stereo matching, camera motion, and residual flow, the component of optical flow caused by object motion rather than by camera motion. Based on rigid projective geometry, the estimated stereo depth guides the camera motion estimation, and the depth and camera motion together guide the residual flow estimation. We also explicitly estimate the 3D scene flow of dynamic objects from the residual flow and scene depth. Experiments on the KITTI dataset demonstrate the effectiveness of our approach and show that our method outperforms other state-of-the-art algorithms on the optical flow and visual odometry tasks.
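The decomposition described above can be sketched numerically: given a depth map and a camera pose change, the rigid flow induced purely by camera motion follows from back-projecting pixels, transforming them, and re-projecting; the residual flow is then the observed flow minus this rigid part. This is a minimal NumPy illustration of the geometry, not the authors' network-based implementation; the function names and interfaces are assumptions for exposition.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Optical flow induced purely by camera motion (illustrative sketch).

    depth : (H, W) depth map of the first frame (must be > 0)
    K     : (3, 3) camera intrinsics
    R, t  : rotation (3, 3) and translation (3,) taking points from
            the first camera frame into the second camera frame
    Returns an (H, W, 2) flow field in pixels.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, shape 3 x (H*W)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T.astype(float)
    # Back-project each pixel to a 3D point using the depth map
    pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    # Rigidly transform into the second camera frame and re-project
    pts2 = R @ pts + t[:, None]
    proj = K @ pts2
    uv2 = proj[:2] / proj[2:3]
    return (uv2 - pix[:2]).T.reshape(H, W, 2)

def residual_flow(total_flow, depth, K, R, t):
    # Residual flow: the part of the observed flow not explained
    # by camera motion, i.e. the flow due to independently moving objects.
    return total_flow - rigid_flow(depth, K, R, t)
```

For a static scene viewed by a stationary camera (identity rotation, zero translation) the rigid flow vanishes, so the residual equals the full observed flow; conversely, for a purely ego-motion scene the residual is zero everywhere.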
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2019-11
Language
English
Citation

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), pp. 1180-1186

ISSN
2153-0858
DOI
10.1109/IROS40897.2019.8967970
URI
http://hdl.handle.net/10203/311800
Appears in Collection
EE-Conference Papers (Conference Papers)
