DC Field | Value | Language |
---|---|---|
dc.contributor.author | Noh, DongKi | ko |
dc.contributor.author | Sung, Changki | ko |
dc.contributor.author | Uhm, Teayoung | ko |
dc.contributor.author | Lee, WooJu | ko |
dc.contributor.author | Lim, Hyungtae | ko |
dc.contributor.author | Choi, Jaeseok | ko |
dc.contributor.author | Lee, Kyuewang | ko |
dc.contributor.author | Hong, Dasol | ko |
dc.contributor.author | Um, Daeho | ko |
dc.contributor.author | Chung, Inseop | ko |
dc.contributor.author | Shin, Hochul | ko |
dc.contributor.author | Kim, MinJung | ko |
dc.contributor.author | Kim, Hyoung-Rock | ko |
dc.contributor.author | Baek, SeungMin | ko |
dc.contributor.author | Myung, Hyun | ko |
dc.date.accessioned | 2023-01-28T03:00:42Z | - |
dc.date.available | 2023-01-28T03:00:42Z | - |
dc.date.created | 2023-01-20 | - |
dc.date.issued | 2023-02 | - |
dc.identifier.citation | IEEE ROBOTICS AND AUTOMATION LETTERS, v.8, no.2, pp.1093 - 1100 | - |
dc.identifier.issn | 2377-3766 | - |
dc.identifier.uri | http://hdl.handle.net/10203/304759 | - |
dc.description.abstract | In the robotics and computer vision communities, surveillance tasks such as human detection, tracking, and motion recognition with a camera have been studied extensively, and deep learning algorithms are widely used for these tasks as for other computer vision tasks. However, existing public datasets are insufficient for developing learning-based methods that handle surveillance in outdoor and extreme situations such as harsh weather and low-illuminance conditions. Therefore, we introduce a new large-scale outdoor surveillance dataset named the eXtremely large-scale Multi-modAl Sensor dataset (X-MAS), containing more than 500,000 image pairs and first-person-view data annotated by well-trained annotators. Each pair contains multi-modal data (e.g., an IR image, an RGB image, a thermal image, a depth image, and a LiDAR scan). To the best of our knowledge, this is the first large-scale first-person-view outdoor multi-modal dataset focusing on surveillance tasks. We present an overview of the proposed dataset with statistics and describe methods of exploiting our dataset with deep learning-based algorithms. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | X-MAS: Extremely Large-Scale Multi-Modal Sensor Dataset for Outdoor Surveillance in Real Environments | - |
dc.type | Article | - |
dc.identifier.wosid | 000920481000003 | - |
dc.identifier.scopusid | 2-s2.0-85147276509 | - |
dc.type.rims | ART | - |
dc.citation.volume | 8 | - |
dc.citation.issue | 2 | - |
dc.citation.beginningpage | 1093 | - |
dc.citation.endingpage | 1100 | - |
dc.citation.publicationname | IEEE ROBOTICS AND AUTOMATION LETTERS | - |
dc.identifier.doi | 10.1109/LRA.2023.3236569 | - |
dc.contributor.localauthor | Myung, Hyun | - |
dc.contributor.nonIdAuthor | Uhm, Teayoung | - |
dc.contributor.nonIdAuthor | Choi, Jaeseok | - |
dc.contributor.nonIdAuthor | Lee, Kyuewang | - |
dc.contributor.nonIdAuthor | Hong, Dasol | - |
dc.contributor.nonIdAuthor | Um, Daeho | - |
dc.contributor.nonIdAuthor | Chung, Inseop | - |
dc.contributor.nonIdAuthor | Shin, Hochul | - |
dc.contributor.nonIdAuthor | Kim, Hyoung-Rock | - |
dc.contributor.nonIdAuthor | Baek, SeungMin | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Surveillance | - |
dc.subject.keywordAuthor | Robots | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Cameras | - |
dc.subject.keywordAuthor | Videos | - |
dc.subject.keywordAuthor | Multimodal sensors | - |
dc.subject.keywordAuthor | Robot vision systems | - |
dc.subject.keywordAuthor | Dataset | - |
dc.subject.keywordAuthor | field robot | - |
dc.subject.keywordAuthor | multi-modal perception | - |
dc.subject.keywordAuthor | surveillance robot | - |
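
The abstract describes each X-MAS sample as a synchronized multi-modal pair (IR, RGB, thermal, and depth images plus a LiDAR scan). As a purely illustrative sketch of how such pairs might be enumerated, the Python snippet below assumes a hypothetical `root/<modality>/<frame_id>.<ext>` directory layout; this record does not document the dataset's actual release format, so every path, directory name, and helper here is an assumption, not the authors' API.

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, Iterator

# Hypothetical modality sub-directories; the published X-MAS release may
# organize files differently -- adjust to the documented layout.
MODALITIES = ("rgb", "ir", "thermal", "depth", "lidar")


@dataclass
class MultiModalPair:
    """One synchronized multi-modal sample (file paths only; decoding is left to the user)."""
    frame_id: str
    files: Dict[str, Path]  # modality name -> file path


def iter_pairs(root: Path) -> Iterator[MultiModalPair]:
    """Yield frames for which a file exists in every modality directory.

    Assumes a layout like root/<modality>/<frame_id>.<ext>, which is an
    illustration only, not the dataset's documented structure.
    """
    for rgb_file in sorted((root / "rgb").glob("*")):
        frame_id = rgb_file.stem
        files = {"rgb": rgb_file}
        # Look for a matching file (same frame id, any extension) per modality.
        for modality in MODALITIES[1:]:
            matches = sorted((root / modality).glob(frame_id + ".*"))
            if not matches:
                break  # incomplete pair: skip this frame
            files[modality] = matches[0]
        else:
            yield MultiModalPair(frame_id=frame_id, files=files)


if __name__ == "__main__":
    # Prints the frame id and available modalities for each complete pair.
    for pair in iter_pairs(Path("X-MAS")):
        print(pair.frame_id, sorted(pair.files))
```

Grouping by frame id rather than trusting file order keeps the sketch robust to missing frames in any one modality; frames lacking even a single modality are skipped rather than yielded partially.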