DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Won Dong | ko |
dc.contributor.author | Yang, Sanghoon | ko |
dc.contributor.author | Kim, Woojong | ko |
dc.contributor.author | Kim, Jeong-Jung | ko |
dc.contributor.author | Kim, Chang-Hyun | ko |
dc.contributor.author | Kim, Jung | ko |
dc.date.accessioned | 2023-07-13T02:00:28Z | - |
dc.date.available | 2023-07-13T02:00:28Z | - |
dc.date.created | 2023-07-13 | - |
dc.date.issued | 2023-08 | - |
dc.identifier.citation | IEEE ROBOTICS AND AUTOMATION LETTERS, v.8, no.8, pp.4481 - 4488 | - |
dc.identifier.issn | 2377-3766 | - |
dc.identifier.uri | http://hdl.handle.net/10203/310470 | - |
dc.description.abstract | Data-driven methods have been successfully applied to images from vision-based tactile sensors to fulfill various manipulation tasks. Nevertheless, these methods remain inefficient because of the lack of methods for simulating the sensors. Relevant research on simulating vision-based tactile sensors generally focuses on generating images without markers, owing to the challenges in accurately generating marker motions caused by elastomer deformation. This precludes access to the tactile information deducible from markers. In this letter, we propose a generative adversarial network (GAN)-based method to generate realistic marker-embedded tactile images in GelSight-like vision-based tactile sensors. We trained the proposed GAN model with an aligned dataset of real tactile images and simulated depth images obtained by deforming the sensor against various objects. This allows the model to translate simulated depth image sequences into RGB tactile images with markers. Furthermore, the generator in the proposed GAN allows the network to integrate the history of deformations from the depth image sequences to generate realistic marker motions during normal and lateral sensor deformations. We evaluated and compared the positional accuracy of the markers and the image similarity metrics of the images generated via our method with those from prior methods. The tactile images generated by the proposed model show a 28.3% decrease in marker positional error and a 93.5% decrease in the image similarity metric (MSE) compared to those generated by previous methods, validating the effectiveness of our approach. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Marker-Embedded Tactile Image Generation via Generative Adversarial Networks | - |
dc.type | Article | - |
dc.identifier.wosid | 001012677300007 | - |
dc.identifier.scopusid | 2-s2.0-85162722836 | - |
dc.type.rims | ART | - |
dc.citation.volume | 8 | - |
dc.citation.issue | 8 | - |
dc.citation.beginningpage | 4481 | - |
dc.citation.endingpage | 4488 | - |
dc.citation.publicationname | IEEE ROBOTICS AND AUTOMATION LETTERS | - |
dc.identifier.doi | 10.1109/LRA.2023.3284370 | - |
dc.contributor.localauthor | Kim, Jung | - |
dc.contributor.nonIdAuthor | Yang, Sanghoon | - |
dc.contributor.nonIdAuthor | Kim, Jeong-Jung | - |
dc.contributor.nonIdAuthor | Kim, Chang-Hyun | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Deep learning methods | - |
dc.subject.keywordAuthor | force and tactile sensing | - |
dc.subject.keywordAuthor | simulation and animation | - |
dc.subject.keywordPlus | TO-REAL TRANSFER | - |
dc.subject.keywordPlus | DOMAIN ADAPTATION | - |
dc.subject.keywordPlus | PERCEPTION | - |
dc.subject.keywordPlus | SENSORS | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.