DSPU: A 281.6mW Real-Time Depth Signal Processing Unit for Deep Learning-Based Dense RGB-D Data Acquisition with Depth Fusion and 3D Bounding Box Extraction in Mobile Platforms

DC Field | Value | Language
dc.contributor.author | Im, DongSeok | ko
dc.contributor.author | Park, Gwangtae | ko
dc.contributor.author | LI, ZHIYONG | ko
dc.contributor.author | Ryu, Junha | ko
dc.contributor.author | Kang, Sanghoon | ko
dc.contributor.author | Han, Donghyeon | ko
dc.contributor.author | Lee, Jinsu | ko
dc.contributor.author | Yoo, Hoi-Jun | ko
dc.date.accessioned | 2022-11-17T05:01:08Z | -
dc.date.available | 2022-11-17T05:01:08Z | -
dc.date.created | 2022-09-27 | -
dc.date.issued | 2022-02 | -
dc.identifier.citation | 2022 IEEE International Solid-State Circuits Conference, ISSCC 2022, pp.510 - 512 | -
dc.identifier.issn | 0193-6530 | -
dc.identifier.uri | http://hdl.handle.net/10203/299786 | -
dc.description.abstract | Emerging mobile platforms, such as autonomous robots and AR devices, require RGB-D data and 3D bounding-box (BB) information for accurate navigation and seamless interaction with the surrounding environment. Specifically, the extraction of RGB-D data and 3D BBs must be performed in real time (>30fps) at low power (<1W) due to limited battery capacity. Moreover, a conventional depth-processing system consumes high power because it relies on a high-performance (HP) time-of-flight (ToF) sensor with an illuminator (>3W) [1]. Yet even the HP ToF fails to extract depth in areas of extreme reflectance, leading to failures in navigation or AR interaction. In addition, a software implementation on an application processor suffers from high latency (>0.1s) to preprocess the depth data and process the 3D point-cloud-based neural network (PNN) [2]. Therefore, this paper proposes an SoC for low-power, low-latency depth estimation and 3D object detection with high accuracy, as shown in Fig. 33.4.1. The system implements depth fusion [3], [4] to allow accurate RGB-D extraction without holes while using a low-power (LP) ToF sensor (<0.4W). The SoC fully accelerates the depth-processing pipeline, achieving a maximum of 45.6fps. | -
dc.language | English | -
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | -
dc.title | DSPU: A 281.6mW Real-Time Depth Signal Processing Unit for Deep Learning-Based Dense RGB-D Data Acquisition with Depth Fusion and 3D Bounding Box Extraction in Mobile Platforms | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85128307668 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 510 | -
dc.citation.endingpage | 512 | -
dc.citation.publicationname | 2022 IEEE International Solid-State Circuits Conference, ISSCC 2022 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | San Francisco | -
dc.identifier.doi | 10.1109/ISSCC42614.2022.9731699 | -
dc.contributor.localauthor | Yoo, Hoi-Jun | -
dc.contributor.nonIdAuthor | Kang, Sanghoon | -
dc.contributor.nonIdAuthor | Lee, Jinsu | -
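The abstract describes depth fusion: filling holes left by the LP ToF sensor (e.g. in regions of extreme reflectance) using a learned dense depth estimate. The DSPU does this in dedicated hardware; the paper does not give its exact fusion rule, so the following is only a hypothetical software analogy. The function name `fuse_depth`, the zero-encoding of ToF holes, and the confidence-weighted blend are all assumptions for illustration, not the chip's actual algorithm.

```python
import numpy as np

def fuse_depth(tof_depth, nn_depth, conf):
    """Hypothetical depth-fusion sketch (not the DSPU's actual rule).

    tof_depth : sparse ToF depth map; holes are assumed to be encoded as 0.
    nn_depth  : dense depth predicted by a neural network from the RGB image.
    conf      : per-pixel confidence in the ToF measurement, in [0, 1].
    """
    tof_depth = np.asarray(tof_depth, dtype=np.float32)
    nn_depth = np.asarray(nn_depth, dtype=np.float32)
    conf = np.asarray(conf, dtype=np.float32)

    valid = tof_depth > 0  # pixels where the LP ToF returned a measurement
    # Blend valid ToF pixels with the NN estimate; fill holes from the NN alone.
    fused = np.where(valid,
                     conf * tof_depth + (1.0 - conf) * nn_depth,
                     nn_depth)
    return fused
```

A confidence-weighted blend is one common way to combine a physically measured but incomplete depth map with a dense but less accurate learned one; the chip's hardware pipeline would additionally need to meet the real-time (>30fps) constraint the abstract states.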
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
