Robust and Efficient Estimation of Absolute Camera Pose for Monocular Visual Odometry

Cited 1 time in Web of Science · Cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Li, Haoang | ko
dc.contributor.author | Chen, Wen | ko
dc.contributor.author | Zhao, Ji | ko
dc.contributor.author | Bazin, Jean-Charles | ko
dc.contributor.author | Luo, Lei | ko
dc.contributor.author | Liu, Zhe | ko
dc.contributor.author | Liu, Yun-Hui | ko
dc.date.accessioned | 2023-08-15T06:00:27Z | -
dc.date.available | 2023-08-15T06:00:27Z | -
dc.date.created | 2023-07-07 | -
dc.date.issued | 2020-05 | -
dc.identifier.citation | 2020 IEEE International Conference on Robotics and Automation, ICRA 2020, pp.2675 - 2681 | -
dc.identifier.issn | 1050-4729 | -
dc.identifier.uri | http://hdl.handle.net/10203/311535 | -
dc.description.abstract | Given a set of 3D-to-2D point correspondences corrupted by outliers, we aim to robustly estimate the absolute camera pose. Existing methods robust to outliers either fail to guarantee high robustness and efficiency simultaneously, or require an appropriate initial pose and thus lack generality. In contrast, we propose a novel approach based on the robust L2-minimizing estimate (L2E) loss. We first define a novel cost function by integrating the projection constraint into the L2E loss. Then to efficiently obtain the global minimum of this function, we propose a hybrid strategy of a local optimizer and branch-and-bound. For branch-and-bound, we derive effective function bounds. Our approach can handle high outlier ratios, leading to high robustness. It can run reliably regardless of whether the initial pose is appropriate, providing high generality. Moreover, given a decent initial pose, it is suitable for real-time applications. Experiments on synthetic and real-world datasets showed that our approach outperforms state-of-the-art methods in terms of robustness and/or efficiency. | -
dc.language | English | -
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | -
dc.title | Robust and Efficient Estimation of Absolute Camera Pose for Monocular Visual Odometry | -
dc.type | Conference | -
dc.identifier.wosid | 000712319501141 | -
dc.identifier.scopusid | 2-s2.0-85092710109 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 2675 | -
dc.citation.endingpage | 2681 | -
dc.citation.publicationname | 2020 IEEE International Conference on Robotics and Automation, ICRA 2020 | -
dc.identifier.conferencecountry | FR | -
dc.identifier.conferencelocation | Paris | -
dc.identifier.doi | 10.1109/ICRA40945.2020.9196814 | -
dc.contributor.localauthor | Bazin, Jean-Charles | -
dc.contributor.nonIdAuthor | Li, Haoang | -
dc.contributor.nonIdAuthor | Chen, Wen | -
dc.contributor.nonIdAuthor | Zhao, Ji | -
dc.contributor.nonIdAuthor | Luo, Lei | -
dc.contributor.nonIdAuthor | Liu, Zhe | -
dc.contributor.nonIdAuthor | Liu, Yun-Hui | -
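
For illustration only, a minimal Python sketch in the spirit of the cost described in the abstract above: an L2E-style robust cost over reprojection residuals, refined with a generic local optimizer. This is not the authors' implementation; the Gaussian-kernel form, the bandwidth sigma, and the helper names (axis_angle_to_R, l2e_cost, refine_pose) are assumptions, and the paper's exact cost function, its derived bounds, and the branch-and-bound search are not reproduced here.

```python
# Hypothetical sketch of an L2E-style robust pose cost (not the paper's code).
import numpy as np
from scipy.optimize import minimize

def axis_angle_to_R(w):
    """Rodrigues formula: axis-angle vector w -> 3x3 rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def l2e_cost(params, X, x, K_intr, sigma=2.0):
    """L2E-style cost: low when many reprojection residuals cluster near zero;
    outliers contribute almost nothing (assumed Gaussian-kernel formulation)."""
    R = axis_angle_to_R(params[:3])
    t = params[3:]
    proj = (K_intr @ (R @ X.T + t[:, None])).T   # project 3D points (n x 3)
    proj = proj[:, :2] / proj[:, 2:3]            # perspective division -> pixels
    r2 = np.sum((proj - x) ** 2, axis=1)         # squared reprojection residuals
    # Gaussian-kernel L2E criterion, up to constants independent of the pose.
    return -np.mean(np.exp(-r2 / (2.0 * sigma ** 2)))

def refine_pose(X, x, K_intr, pose0, sigma=2.0):
    """Local refinement from an initial pose (axis-angle + translation).
    The paper additionally uses branch-and-bound with derived bounds to find
    the global minimum when no decent initialization is available."""
    res = minimize(l2e_cost, pose0, args=(X, x, K_intr, sigma),
                   method='Nelder-Mead')
    return axis_angle_to_R(res.x[:3]), res.x[3:]

# Usage (hypothetical data): R_est, t_est = refine_pose(X_world, x_pixels, K, np.zeros(6))
```

With a good initial pose, local refinement of such a bounded-influence cost is what makes real-time use plausible; the branch-and-bound layer described in the abstract is what removes the dependence on that initialization.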
Appears in Collection
GCT-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.