Deep introspective SLAM: deep reinforcement learning based approach to avoid tracking failure in visual SLAM

Cited 9 times in Web of Science · Cited 0 times in Scopus
  • Hits: 182
  • Downloads: 0
DC Field: Value (Language)
dc.contributor.author: Naveed, Kanwal (ko)
dc.contributor.author: Anjum, Muhammad Latif (ko)
dc.contributor.author: Hussain, Wajahat (ko)
dc.contributor.author: Lee, Donghwan (ko)
dc.date.accessioned: 2022-08-05T01:00:14Z
dc.date.available: 2022-08-05T01:00:14Z
dc.date.created: 2022-07-11
dc.date.issued: 2022-08
dc.identifier.citation: AUTONOMOUS ROBOTS, v.46, no.6, pp.705 - 724
dc.identifier.issn: 0929-5593
dc.identifier.uri: http://hdl.handle.net/10203/297823
dc.description.abstract: Reliable and consistent tracking is essential to realize the dream of power-on-and-go autonomy in mobile robots. Our investigation with state-of-the-art visual navigation and mapping tools (e.g. ORB-SLAM) reveals that these tools suffer from frequent and unexpected tracking failures, especially when tested in the wild. This hinders the ability of robots to reach a goal position less than 10 meters away, without tracking failure, thereby limiting the prospects of real autonomy. We present an introspection-based approach (Introspective-SLAM) that enables SLAM to evaluate the safety of navigation steps with respect to tracking failure, before the steps are actually taken. Navigation steps that appear unsafe are thereby avoided, and an alternative path to the goal is planned. We propose a novel deep reinforcement learning (DQN) based network to evaluate the safety of future navigation steps using a single image only. Surprisingly, training of our DQN completes in a short amount of time (< 60 h). Even then, this network outperforms several handcrafted and Q-learning based pipelines to achieve state-of-the-art performance. Interestingly, training the DQN in realistic simulators (MINOS), consisting of reconstructed interiors, shows good generalization across real-world indoor-outdoor settings. Finally, extensive testing of visual SLAM, equipped with our DQN, shows that tracking failures occur frequently and are a major hindrance in reaching the goal. Currently, there is no standard benchmark to evaluate active visual SLAM approaches. We have released a benchmark of 50 episodes with this work. We hope these findings/benchmark will encourage progress for power-on-and-go visual SLAM without any manual supervision.
dc.language: English
dc.publisher: SPRINGER
dc.title: Deep introspective SLAM: deep reinforcement learning based approach to avoid tracking failure in visual SLAM
dc.type: Article
dc.identifier.wosid: 000813592300001
dc.identifier.scopusid: 2-s2.0-85132160481
dc.type.rims: ART
dc.citation.volume: 46
dc.citation.issue: 6
dc.citation.beginningpage: 705
dc.citation.endingpage: 724
dc.citation.publicationname: AUTONOMOUS ROBOTS
dc.identifier.doi: 10.1007/s10514-022-10046-9
dc.contributor.localauthor: Lee, Donghwan
dc.contributor.nonIdAuthor: Naveed, Kanwal
dc.contributor.nonIdAuthor: Anjum, Muhammad Latif
dc.contributor.nonIdAuthor: Hussain, Wajahat
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Reinforcement learning
dc.subject.keywordAuthor: Introspection
dc.subject.keywordAuthor: Visual SLAM
dc.subject.keywordAuthor: Robot navigation
dc.subject.keywordPlus: DOMAIN
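The abstract describes an introspective loop: before a navigation step is executed, a learned Q-network scores the step's risk of causing tracking failure from a single image, unsafe steps are skipped, and the planner falls back to an alternative path. A minimal sketch of that decision loop is below; the names (`q_network`, `ACTIONS`, `SAFETY_THRESHOLD`) and the toy brightness heuristic standing in for the trained DQN are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of introspective step selection: score each candidate
# navigation action for tracking safety before executing it; if no action
# clears the threshold, signal the planner to replan.

ACTIONS = ["forward", "turn_left", "turn_right"]
SAFETY_THRESHOLD = 0.5  # assumed cutoff on the predicted safety score

def q_network(image, action):
    """Stand-in for the paper's trained DQN: returns a safety score in
    [0, 1]. Here, a toy heuristic on mean image intensity (well-lit,
    textured views tend to track better)."""
    brightness = sum(image) / len(image) / 255.0
    bonus = 0.1 if action != "forward" else 0.0  # turns are easier to recover from
    return min(1.0, brightness + bonus)

def choose_safe_action(image):
    """Return the highest-scoring action whose predicted safety clears
    the threshold, or None to tell the planner to find another path."""
    best = max(ACTIONS, key=lambda a: q_network(image, a))
    if q_network(image, best) >= SAFETY_THRESHOLD:
        return best
    return None  # every candidate step looks unsafe: replan

bright_view = [200] * 64  # toy "image" as a flat list of pixel intensities
dark_view = [20] * 64
print(choose_safe_action(bright_view))  # some action is judged safe
print(choose_safe_action(dark_view))    # None: trigger replanning
```

In this sketch the safety check runs on the current image only, mirroring the single-image evaluation claimed in the abstract; the replanning itself is left to an external path planner.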
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.