A Pipelined Point Cloud Based Neural Network Processor for 3-D Vision With Large-Scale Max Pooling Layer Prediction

Cited 3 times in Web of Science; cited 0 times in Scopus.
DC Field: Value (Language)
dc.contributor.author: Im, Dongseok (ko)
dc.contributor.author: Han, Donghyeon (ko)
dc.contributor.author: Kang, Sanghoon (ko)
dc.contributor.author: Yoo, Hoi-Jun (ko)
dc.date.accessioned: 2022-02-08T06:42:08Z
dc.date.available: 2022-02-08T06:42:08Z
dc.date.created: 2022-01-04
dc.date.issued: 2022-02
dc.identifier.citation: IEEE JOURNAL OF SOLID-STATE CIRCUITS, v.57, no.2, pp.661 - 670
dc.identifier.issn: 0018-9200
dc.identifier.uri: http://hdl.handle.net/10203/292107
dc.description.abstract: Point cloud data provides useful geometric information to 3-D intelligent systems such as autonomous driving, 3-D reconstruction, and hand pose estimation (HPE). Many mobile devices implement such 3-D intelligent systems with limited hardware resources, yet previous processors were not designed to accelerate point cloud based neural networks (PNNs), which consist of sampling-grouping layers (SGLs) and convolution layers (CLs). In this article, a pipelined PNN processor is proposed for low-latency PNN-based 3-D intelligent systems on mobile devices. The processor adopts a pipelined heterogeneous architecture to accelerate both SGLs and CLs. The window-based sampling-grouping (WSG) algorithm boosts SGL throughput by 2.34x by sampling and grouping the 3-D point cloud directly from the depth image. Furthermore, the max pooling (MP) prediction core (MPPC) predicts the outputs of the large-scale (64- and 128-to-1) MP layers, increasing throughput by a further 1.31x. In addition, performing MP prediction on tiled data hides the latency of the MPPC and resolves the bank conflicts on the in-out memories of the convolution core (CC). As a result, the processor demonstrates a PNN-based HPE system with 4.45 ms of processing time, 8.24 mm of HPE error, and 266 mW of power consumption.
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: A Pipelined Point Cloud Based Neural Network Processor for 3-D Vision With Large-Scale Max Pooling Layer Prediction
dc.type: Article
dc.identifier.wosid: 000732911000001
dc.identifier.scopusid: 2-s2.0-85112215438
dc.type.rims: ART
dc.citation.volume: 57
dc.citation.issue: 2
dc.citation.beginningpage: 661
dc.citation.endingpage: 670
dc.citation.publicationname: IEEE JOURNAL OF SOLID-STATE CIRCUITS
dc.identifier.doi: 10.1109/JSSC.2021.3090864
dc.contributor.localauthor: Yoo, Hoi-Jun
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Convolution
dc.subject.keywordAuthor: Three-dimensional displays
dc.subject.keywordAuthor: Computer architecture
dc.subject.keywordAuthor: Prediction algorithms
dc.subject.keywordAuthor: Neural networks
dc.subject.keywordAuthor: Intelligent systems
dc.subject.keywordAuthor: Throughput
dc.subject.keywordAuthor: 3-D vision
dc.subject.keywordAuthor: ball query (BQ)
dc.subject.keywordAuthor: convolutional neural network (CNN)
dc.subject.keywordAuthor: deep neural network
dc.subject.keywordAuthor: hand pose estimation (HPE)
dc.subject.keywordAuthor: max pooling (MP) layer prediction
dc.subject.keywordAuthor: point cloud based neural network (PNN)
dc.subject.keywordAuthor: sampling-grouping (SG) algorithm
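
Note: the abstract above refers to the generic sampling-grouping and large-scale max pooling operations used by PNNs such as PointNet++-style networks. The NumPy sketch below is only a conceptual illustration of those software operations (farthest point sampling, ball query, and a 64-to-1 per-group max pooling); it is not the paper's WSG algorithm or MPPC hardware, and the function names, radius, and group sizes are illustrative assumptions.

# Minimal NumPy sketch of a sampling-grouping (SG) step followed by a
# per-group max pooling, as performed in software by PointNet++-style PNNs.
# Illustrative only; not the paper's WSG algorithm or MPPC hardware.
import numpy as np


def farthest_point_sampling(points, n_samples):
    """Pick n_samples well-spread centroid indices from an (N, 3) point cloud."""
    n = points.shape[0]
    chosen = np.zeros(n_samples, dtype=np.int64)
    dist = np.full(n, np.inf)
    chosen[0] = 0  # start from an arbitrary point
    for i in range(1, n_samples):
        # update each point's distance to its nearest already-chosen centroid
        d = np.sum((points - points[chosen[i - 1]]) ** 2, axis=1)
        dist = np.minimum(dist, d)
        chosen[i] = int(np.argmax(dist))  # farthest remaining point
    return chosen


def ball_query(points, centroids, radius, group_size):
    """For each centroid, gather up to group_size neighbor indices within radius."""
    groups = []
    for c in centroids:
        d = np.linalg.norm(points - points[c], axis=1)
        idx = np.nonzero(d < radius)[0][:group_size]
        # pad by repeating the first neighbor so every group has equal size
        pad = np.full(group_size - idx.size, idx[0] if idx.size else c)
        groups.append(np.concatenate([idx, pad]))
    return np.stack(groups)  # (n_samples, group_size)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(-1.0, 1.0, size=(1024, 3)).astype(np.float32)

    centroids = farthest_point_sampling(cloud, n_samples=128)
    groups = ball_query(cloud, centroids, radius=0.2, group_size=64)

    # Stand-in for per-point features; a real PNN would compute these with
    # shared MLP / convolution layers before pooling.
    features = rng.standard_normal((1024, 32)).astype(np.float32)

    # 64-to-1 max pooling over each group: the large-scale reduction whose
    # outcome the paper's MPPC predicts to reduce redundant computation.
    pooled = features[groups].max(axis=1)  # (128, 32)
    print(pooled.shape)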
Appears in Collection
EE-Journal Papers (Journal Papers)
