DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Hak Gu | ko |
dc.contributor.author | Lim, Heoun-taek | ko |
dc.contributor.author | Ro, Yong Man | ko |
dc.date.accessioned | 2020-04-23T01:20:05Z | - |
dc.date.available | 2020-04-23T01:20:05Z | - |
dc.date.created | 2019-02-20 | - |
dc.date.issued | 2020-04 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, v.30, no.4, pp.917 - 928 | - |
dc.identifier.issn | 1051-8215 | - |
dc.identifier.uri | http://hdl.handle.net/10203/273989 | - |
dc.description.abstract | In this paper, we propose a novel deep learning-based virtual reality image quality assessment method that automatically predicts the visual quality of an omnidirectional image. In order to assess the visual quality in viewing the omnidirectional image, we propose deep networks consisting of a virtual reality (VR) quality score predictor and a human perception guider. The proposed VR quality score predictor learns the positional and visual characteristics of the omnidirectional image by encoding the positional feature and visual feature of a patch on the omnidirectional image. With the encoded positional feature and visual feature, a patch weight and a patch quality score are estimated. Then, by aggregating all weights and scores of the patches, the image quality score is predicted. The proposed human perception guider evaluates the predicted quality score by referring to the human subjective score (i.e., ground-truth obtained by subjects) using adversarial learning. With adversarial learning, the VR quality score predictor is trained to accurately predict the quality score in order to deceive the guider, while the proposed human perception guider is trained to precisely distinguish between the predictor score and the ground-truth subjective score. To verify the performance of the proposed method, we conducted comprehensive subjective experiments and evaluated the performance of the proposed method. The experimental results show that the proposed method outperforms the existing two-dimensional image quality models and the state-of-the-art image quality models for omnidirectional images. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Deep Virtual Reality Image Quality Assessment with Human Perception Guider for Omnidirectional Image | - |
dc.type | Article | - |
dc.identifier.wosid | 000561099300002 | - |
dc.identifier.scopusid | 2-s2.0-85083077544 | - |
dc.type.rims | ART | - |
dc.citation.volume | 30 | - |
dc.citation.issue | 4 | - |
dc.citation.beginningpage | 917 | - |
dc.citation.endingpage | 928 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY | - |
dc.identifier.doi | 10.1109/TCSVT.2019.2898732 | - |
dc.contributor.localauthor | Ro, Yong Man | - |
dc.contributor.nonIdAuthor | Lim, Heoun-taek | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Visualization | - |
dc.subject.keywordAuthor | Image quality | - |
dc.subject.keywordAuthor | Measurement | - |
dc.subject.keywordAuthor | Image coding | - |
dc.subject.keywordAuthor | Distortion | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | Quality assessment | - |
dc.subject.keywordAuthor | Adversarial learning | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | omnidirectional image | - |
dc.subject.keywordAuthor | quality assessment | - |
dc.subject.keywordAuthor | virtual reality | - |
dc.subject.keywordPlus | COMPRESSION | - |
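The abstract describes aggregating per-patch quality scores, each weighted by an estimated patch importance, into a single image-level score. A minimal sketch of that aggregation step is below; it assumes softmax-normalized weights, and all function and variable names are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def aggregate_image_score(patch_scores, patch_weights):
    """Combine per-patch quality scores into one image-level score.

    patch_scores  : array of quality scores, one per patch
    patch_weights : array of raw (unnormalized) patch importance weights

    The raw weights are softmax-normalized (an assumption for this
    sketch), then used as a weighted average over the patch scores.
    """
    # Numerically stable softmax over the raw patch weights.
    w = np.exp(patch_weights - np.max(patch_weights))
    w /= w.sum()
    # Weighted average of patch scores = predicted image quality score.
    return float(np.dot(w, patch_scores))

# Example: three patches with hypothetical scores and raw weights.
scores = np.array([0.8, 0.5, 0.9])
weights = np.array([2.0, 0.5, 1.0])
print(round(aggregate_image_score(scores, weights), 3))  # -> 0.781
```

The highest-weighted patch dominates the average, so regions the weight branch deems perceptually important contribute most to the final score.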