Loop-Net: Joint Unsupervised Disparity and Optical Flow Estimation of Stereo Videos with Spatiotemporal Loop Consistency

Cited 5 times in Web of Science · Cited 4 times in Scopus
  • Hit: 608
  • Download: 0
DC Field | Value | Language
dc.contributor.author | Kim, Taewoo | ko
dc.contributor.author | Ryu, Kwonyoung | ko
dc.contributor.author | Song, Kyeongseob | ko
dc.contributor.author | Yoon, Kuk-Jin | ko
dc.date.accessioned | 2020-08-11T00:55:05Z | -
dc.date.available | 2020-08-11T00:55:05Z | -
dc.date.created | 2020-06-23 | -
dc.date.issued | 2020-10 | -
dc.identifier.citation | IEEE ROBOTICS AND AUTOMATION LETTERS, v.5, no.4, pp.5597 - 5604 | -
dc.identifier.issn | 2377-3766 | -
dc.identifier.uri | http://hdl.handle.net/10203/275759 | -
dc.description.abstract | Most existing deep learning-based depth and optical flow estimation methods require supervision from large amounts of ground-truth data and hardly generalize to video frames, resulting in temporal inconsistency. In this letter, we propose a joint framework that estimates the disparity and optical flow of stereo videos and generalizes across various video frames by considering the spatiotemporal relation between disparity and flow without supervision. To improve both accuracy and consistency, we propose a loop consistency loss that enforces the spatiotemporal consistency of the estimated disparity and optical flow. Furthermore, we introduce a video-based training scheme using a c-LSTM to reinforce temporal consistency. Extensive experiments show that our proposed methods not only estimate disparity and optical flow accurately but also further improve spatiotemporal consistency. Our framework outperforms state-of-the-art unsupervised depth and optical flow estimation models on the KITTI benchmark dataset. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Loop-Net: Joint Unsupervised Disparity and Optical Flow Estimation of Stereo Videos with Spatiotemporal Loop Consistency | -
dc.type | Article | -
dc.identifier.wosid | 000552945200003 | -
dc.identifier.scopusid | 2-s2.0-85089488495 | -
dc.type.rims | ART | -
dc.citation.volume | 5 | -
dc.citation.issue | 4 | -
dc.citation.beginningpage | 5597 | -
dc.citation.endingpage | 5604 | -
dc.citation.publicationname | IEEE ROBOTICS AND AUTOMATION LETTERS | -
dc.identifier.doi | 10.1109/LRA.2020.3009065 | -
dc.contributor.localauthor | Yoon, Kuk-Jin | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Deep learning for visual perception | -
dc.subject.keywordAuthor | visual learning | -
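
Note: the abstract above describes a loop consistency loss that ties the estimated disparity and optical flow of a stereo video together across views and frames. The Python sketch below is only a rough illustration of that general idea, not the formulation used in the paper: for each left-view pixel at frame t, warping by optical flow and then by disparity should land at the same right-view location at frame t+1 as warping by disparity first and then by the right-view flow. The function names, the disparity sign convention, the nearest-neighbour sampling, and the L1 penalty are all assumptions made for this sketch.

    import numpy as np

    def sample(field, coords_x, coords_y):
        # Nearest-neighbour lookup of a per-pixel field at (possibly fractional)
        # coordinates, clamped to the image border.
        h, w = field.shape[:2]
        xi = np.clip(np.rint(coords_x).astype(int), 0, w - 1)
        yi = np.clip(np.rint(coords_y).astype(int), 0, h - 1)
        return field[yi, xi]

    def loop_consistency_loss(disp_l_t, disp_l_t1, flow_l, flow_r):
        # disp_l_t, disp_l_t1: (H, W) left-view disparities at frames t and t+1.
        # flow_l, flow_r:      (H, W, 2) optical flow from t to t+1 in the left
        #                      and right views.
        # Returns the mean distance between the two end points of the
        # spatiotemporal loop (a hypothetical consistency measure).
        h, w = disp_l_t.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)

        # Path A: left_t --(left flow)--> left_{t+1} --(disparity at t+1)--> right_{t+1}
        xa = xs + flow_l[..., 0]
        ya = ys + flow_l[..., 1]
        xa_right = xa - sample(disp_l_t1, xa, ya)

        # Path B: left_t --(disparity at t)--> right_t --(right flow)--> right_{t+1}
        xb = xs - disp_l_t
        xb_right = xb + sample(flow_r[..., 0], xb, ys)
        yb_right = ys + sample(flow_r[..., 1], xb, ys)

        # L1 distance between the two loop end points, averaged over all pixels.
        return float(np.mean(np.abs(xa_right - xb_right) + np.abs(ya - yb_right)))

    # Toy usage with random fields; in practice the inputs would come from the
    # disparity and flow networks.
    h, w = 4, 6
    loss = loop_consistency_loss(np.random.rand(h, w), np.random.rand(h, w),
                                 np.random.rand(h, w, 2), np.random.rand(h, w, 2))

In an actual training pipeline one would presumably use differentiable bilinear sampling and mask occluded or out-of-view pixels before averaging; the paper itself should be consulted for the exact loss terms.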
Appears in Collection
ME-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
