Deep Virtual Reality Image Quality Assessment with Human Perception Guider for Omnidirectional Image

Cited 101 times in Web of Science; cited 63 times in Scopus
  • Hit: 599
  • Download: 0
DC Field: Value (Language)
dc.contributor.author: Kim, Hak Gu (ko)
dc.contributor.author: Lim, Heoun-taek (ko)
dc.contributor.author: Ro, Yong Man (ko)
dc.date.accessioned: 2020-04-23T01:20:05Z
dc.date.available: 2020-04-23T01:20:05Z
dc.date.created: 2019-02-20
dc.date.issued: 2020-04
dc.identifier.citation: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, v.30, no.4, pp.917 - 928
dc.identifier.issn: 1051-8215
dc.identifier.uri: http://hdl.handle.net/10203/273989
dc.description.abstract: In this paper, we propose a novel deep learning-based virtual reality image quality assessment method that automatically predicts the visual quality of an omnidirectional image. To assess the visual quality of viewing an omnidirectional image, we propose deep networks consisting of a virtual reality (VR) quality score predictor and a human perception guider. The proposed VR quality score predictor learns the positional and visual characteristics of the omnidirectional image by encoding the positional and visual features of each patch on the omnidirectional image. From the encoded positional and visual features, a patch weight and a patch quality score are estimated. Then, by aggregating the weights and scores of all patches, the image quality score is predicted. The proposed human perception guider evaluates the predicted quality score by referring to the human subjective score (i.e., the ground truth obtained from subjects) using adversarial learning. With adversarial learning, the VR quality score predictor is trained to predict the quality score accurately in order to deceive the guider, while the proposed human perception guider is trained to precisely distinguish between the predictor score and the ground-truth subjective score. To verify the performance of the proposed method, we conducted comprehensive subjective experiments and evaluated the performance of the proposed method. The experimental results show that the proposed method outperforms the existing two-dimensional image quality models and the state-of-the-art image quality models for omnidirectional images.
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: Deep Virtual Reality Image Quality Assessment with Human Perception Guider for Omnidirectional Image
dc.type: Article
dc.identifier.wosid: 000561099300002
dc.identifier.scopusid: 2-s2.0-85083077544
dc.type.rims: ART
dc.citation.volume: 30
dc.citation.issue: 4
dc.citation.beginningpage: 917
dc.citation.endingpage: 928
dc.citation.publicationname: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
dc.identifier.doi: 10.1109/TCSVT.2019.2898732
dc.contributor.localauthor: Ro, Yong Man
dc.contributor.nonIdAuthor: Lim, Heoun-taek
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Visualization
dc.subject.keywordAuthor: Image quality
dc.subject.keywordAuthor: Measurement
dc.subject.keywordAuthor: Image coding
dc.subject.keywordAuthor: Distortion
dc.subject.keywordAuthor: Deep learning
dc.subject.keywordAuthor: Quality assessment
dc.subject.keywordAuthor: Adversarial learning
dc.subject.keywordAuthor: deep learning
dc.subject.keywordAuthor: omnidirectional image
dc.subject.keywordAuthor: quality assessment
dc.subject.keywordAuthor: virtual reality
dc.subject.keywordPlus: COMPRESSION
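The abstract describes the VR quality score predictor as estimating a quality score and a weight for each patch of the omnidirectional image, then aggregating them into one image-level score. A minimal sketch of that aggregation step, assuming the per-patch scores and weights have already been produced by the network (the function name and plain-Python form are illustrative, not the authors' implementation):

```python
def aggregate_vr_quality(patch_scores, patch_weights):
    """Weight-normalized aggregation of per-patch quality scores.

    Per the abstract, the predictor estimates a quality score and a
    weight for each patch; the image quality score is obtained by
    aggregating all patch weights and scores, sketched here as a
    weighted average.
    """
    total_weight = sum(patch_weights)
    weighted_sum = sum(w * s for w, s in zip(patch_weights, patch_scores))
    return weighted_sum / total_weight


# Hypothetical example: two patches, the first weighted three times as
# heavily as the second (e.g. a patch nearer the typical viewing region).
print(aggregate_vr_quality([4.0, 2.0], [3.0, 1.0]))  # -> 3.5
```

The adversarial part of the method then trains this predictor so that its output score is indistinguishable, to the human perception guider, from the ground-truth subjective score.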
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
