Category-specific Salient View Selection via Deep Convolutional Neural Networks

Cited 9 times in Web of Science · Cited 0 times in Scopus
  • Hits: 518
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Kim, Seong-Heum | ko
dc.contributor.author | Tai, Yu-Wing | ko
dc.contributor.author | Lee, Joon-Young | ko
dc.contributor.author | Park, Jaesik | ko
dc.contributor.author | Kweon, In-So | ko
dc.date.accessioned | 2018-01-30T04:20:07Z | -
dc.date.available | 2018-01-30T04:20:07Z | -
dc.date.created | 2016-12-29 | -
dc.date.issued | 2017-12 | -
dc.identifier.citation | COMPUTER GRAPHICS FORUM, v.36, no.8, pp.313 - 328 | -
dc.identifier.issn | 0167-7055 | -
dc.identifier.uri | http://hdl.handle.net/10203/238822 | -
dc.description.abstract | In this paper, we present a new framework to determine the upright and front orientations of 3D models and to detect their salient views. The viewpoint most salient to human preferences is the most informative projection of a model in its correct upright orientation. Our method utilizes two Convolutional Neural Network (CNN) architectures to encode category-specific information learnt from a large number of 3D shapes and 2D images on the web. Using the first CNN model with 3D voxel data, we generate a CNN shape feature to decide the natural upright orientation of a 3D object. Once a 3D model is upright-aligned, its front projection and salient views are scored by category recognition using the second CNN model, which is trained on popular photo collections from internet users. To model comfortable viewing angles of 3D models, a category-dependent prior is also learnt from the users. Our approach effectively combines category-specific scores and classical evaluations to produce a data-driven viewpoint saliency map. The best viewpoints from our method are quantitatively and qualitatively validated on more than 100 objects from 20 categories, and our thumbnail images of 3D models are the most favoured among those produced by different approaches. | -
dc.language | English | -
dc.publisher | WILEY-BLACKWELL | -
dc.subject | MENTAL ROTATION | -
dc.subject | 3D SHAPES | -
dc.subject | OBJECTS | -
dc.subject | ORIENTATION | -
dc.subject | MODELS | -
dc.subject | IMAGE | -
dc.title | Category-specific Salient View Selection via Deep Convolutional Neural Networks | -
dc.type | Article | -
dc.identifier.wosid | 000417496200022 | -
dc.identifier.scopusid | 2-s2.0-85013249519 | -
dc.type.rims | ART | -
dc.citation.volume | 36 | -
dc.citation.issue | 8 | -
dc.citation.beginningpage | 313 | -
dc.citation.endingpage | 328 | -
dc.citation.publicationname | COMPUTER GRAPHICS FORUM | -
dc.identifier.doi | 10.1111/cgf.13082 | -
dc.contributor.localauthor | Kweon, In-So | -
dc.contributor.nonIdAuthor | Kim, Seong-Heum | -
dc.contributor.nonIdAuthor | Tai, Yu-Wing | -
dc.contributor.nonIdAuthor | Lee, Joon-Young | -
dc.contributor.nonIdAuthor | Park, Jaesik | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | best view selection | -
dc.subject.keywordAuthor | upright orientation estimation | -
dc.subject.keywordAuthor | deep learning | -
dc.subject.keywordPlus | MENTAL ROTATION | -
dc.subject.keywordPlus | 3D SHAPES | -
dc.subject.keywordPlus | OBJECTS | -
dc.subject.keywordPlus | ORIENTATION | -
dc.subject.keywordPlus | MODELS | -
dc.subject.keywordPlus | IMAGE | -
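The abstract describes combining category-specific CNN recognition scores with classical view evaluations into a data-driven viewpoint saliency map, from which the best view is picked. A minimal sketch of that blending step, assuming a simple linear combination with a hypothetical weight (illustrative only, not the authors' implementation):

```python
# Hypothetical sketch: blend per-view category-recognition scores with
# classical view-quality measures into a single saliency value per view,
# then pick the highest-scoring viewpoint. Weights and score values are
# illustrative assumptions, not taken from the paper.

def view_saliency(category_scores, classical_scores, weight=0.5):
    """Linear blend of CNN category scores and classical evaluations."""
    assert len(category_scores) == len(classical_scores)
    return [weight * c + (1.0 - weight) * g
            for c, g in zip(category_scores, classical_scores)]

def best_view(category_scores, classical_scores, weight=0.5):
    """Index of the viewpoint with the highest blended saliency."""
    saliency = view_saliency(category_scores, classical_scores, weight)
    return max(range(len(saliency)), key=saliency.__getitem__)

# Example: four candidate viewpoints of an upright-aligned model.
cnn = [0.2, 0.9, 0.5, 0.4]      # category recognition confidence per view
classic = [0.6, 0.7, 0.8, 0.3]  # e.g. a classical measure such as projected area
print(best_view(cnn, classic))  # -> 1
```

The weight controls how strongly the learnt category evidence dominates the classical geometric measures; the paper's actual combination rule may differ.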
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.