ViViD++: Vision for Visibility Dataset

Cited 21 times in Web of Science · Cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Lee, Alex Junho | ko
dc.contributor.author | Cho, Younggun | ko
dc.contributor.author | Shin, Young-sik | ko
dc.contributor.author | Kim, Ayoung | ko
dc.contributor.author | Myung, Hyun | ko
dc.date.accessioned | 2022-05-12T03:00:20Z | -
dc.date.available | 2022-05-12T03:00:20Z | -
dc.date.created | 2022-05-12 | -
dc.date.issued | 2022-07 | -
dc.identifier.citation | IEEE ROBOTICS AND AUTOMATION LETTERS, v.7, no.3, pp.6282 - 6289 | -
dc.identifier.issn | 2377-3766 | -
dc.identifier.uri | http://hdl.handle.net/10203/296501 | -
dc.description.abstract | In this letter, we present a dataset capturing diverse visual data formats that target varying luminance conditions. While RGB cameras provide rich and intuitive information, changes in lighting conditions can cause catastrophic failure in robotic applications that rely on vision sensors. Approaches to overcoming illumination problems include developing more robust algorithms and adopting other types of visual sensors, such as thermal and event cameras. Despite the potential of these alternative sensors, there are still few datasets that include them. We therefore provide a dataset recorded with alternative vision sensors, handheld or mounted on a car, covering the same spaces repeatedly under different conditions. We aim to acquire visible information from co-aligned alternative vision sensors. Our sensor system collects data largely independently of visible-light intensity by measuring infrared dissipation, depth from structured reflection, and instantaneous temporal changes in luminance. We provide these measurements along with inertial sensor data and ground truth for developing robust visual SLAM under poor illumination. (See the data-loading sketch after this table.) | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | ViViD++: Vision for Visibility Dataset | -
dc.type | Article | -
dc.identifier.wosid | 000790811700003 | -
dc.identifier.scopusid | 2-s2.0-85129221071 | -
dc.type.rims | ART | -
dc.citation.volume | 7 | -
dc.citation.issue | 3 | -
dc.citation.beginningpage | 6282 | -
dc.citation.endingpage | 6289 | -
dc.citation.publicationname | IEEE ROBOTICS AND AUTOMATION LETTERS | -
dc.identifier.doi | 10.1109/lra.2022.3168335 | -
dc.contributor.localauthor | Myung, Hyun | -
dc.contributor.nonIdAuthor | Lee, Alex Junho | -
dc.contributor.nonIdAuthor | Cho, Younggun | -
dc.contributor.nonIdAuthor | Shin, Young-sik | -
dc.contributor.nonIdAuthor | Kim, Ayoung | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Data sets for SLAM | -
dc.subject.keywordAuthor | data sets for robotic vision | -
dc.subject.keywordAuthor | data sets for robot learning | -
dc.subject.keywordPlus | EVENT-CAMERA DATASET | -
dc.subject.keywordPlus | PLACE RECOGNITION | -
dc.subject.keywordPlus | ODOMETRY | -
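
The abstract describes recordings from co-aligned RGB, thermal, depth, and event cameras plus an IMU and ground truth. As a minimal sketch of how such a multi-sensor sequence might be consumed, assuming the recordings are distributed as ROS bags (the topic names and file name below are hypothetical placeholders, not taken from the dataset documentation):

```python
# Minimal sketch: count messages per sensor stream in one recording,
# assuming a ROS bag layout. Topic names here are hypothetical.
import rosbag

TOPICS = {
    "/rgb/image_raw":     "RGB camera",      # visible-light frames
    "/thermal/image_raw": "thermal camera",  # infrared dissipation
    "/depth/image_raw":   "depth camera",    # structured reflection
    "/event/events":      "event camera",    # temporal luminance changes
    "/imu/data":          "IMU",             # inertial measurements
}

def summarize(bag_path):
    """Iterate one recording and report how many messages each stream holds."""
    counts = {topic: 0 for topic in TOPICS}
    with rosbag.Bag(bag_path) as bag:
        for topic, msg, stamp in bag.read_messages(topics=list(TOPICS)):
            counts[topic] += 1
    for topic, n in counts.items():
        print(f"{TOPICS[topic]:>14} ({topic}): {n} messages")

if __name__ == "__main__":
    summarize("vivid_sequence.bag")  # hypothetical file name
```

Timestamps returned by read_messages could then be used to associate frames across modalities, e.g. pairing each thermal frame with the nearest RGB frame when evaluating illumination-robust SLAM.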
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.