Learning-Based Just-Noticeable-Quantization-Distortion Modeling for Perceptual Video Coding

Cited 53 times in Web of Science · Cited 0 times in Scopus
  • Hit : 887
  • Download : 0
DC Field | Value | Language
dc.contributor.author | Ki, Sehwan | ko
dc.contributor.author | Bae, Sung-Ho | ko
dc.contributor.author | Kim, Munchurl | ko
dc.contributor.author | Ko, Hyunsuk | ko
dc.date.accessioned | 2018-05-23T06:43:22Z | -
dc.date.available | 2018-05-23T06:43:22Z | -
dc.date.created | 2018-04-30 | -
dc.date.issued | 2018-07 | -
dc.identifier.citation | IEEE TRANSACTIONS ON IMAGE PROCESSING, v.27, no.7, pp.3178 - 3193 | -
dc.identifier.issn | 1057-7149 | -
dc.identifier.uri | http://hdl.handle.net/10203/241555 | -
dc.description.abstract | Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds of distortion accompanied by energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. The first, called LR-JNQD, is based on linear regression and determines the JNQD model parameters from extracted handcrafted features. The second, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, ours is the first approach that automatically adjusts JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with encoding the input without preprocessing. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.subject | JND MODEL | -
dc.subject | IMAGES | -
dc.subject | PROFILE | -
dc.subject | DIFFERENCE | -
dc.title | Learning-Based Just-Noticeable-Quantization-Distortion Modeling for Perceptual Video Coding | -
dc.type | Article | -
dc.identifier.wosid | 000429730000003 | -
dc.identifier.scopusid | 2-s2.0-85044389721 | -
dc.type.rims | ART | -
dc.citation.volume | 27 | -
dc.citation.issue | 7 | -
dc.citation.beginningpage | 3178 | -
dc.citation.endingpage | 3193 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON IMAGE PROCESSING | -
dc.identifier.doi | 10.1109/TIP.2018.2818439 | -
dc.contributor.localauthor | Kim, Munchurl | -
dc.contributor.nonIdAuthor | Bae, Sung-Ho | -
dc.contributor.nonIdAuthor | Ko, Hyunsuk | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Just noticeable distortion (JND) | -
dc.subject.keywordAuthor | perceptual video coding (PVC) | -
dc.subject.keywordAuthor | quantization distortion | -
dc.subject.keywordAuthor | convolution neural network (CNN) | -
dc.subject.keywordAuthor | preprocessing of video coding | -
dc.subject.keywordPlus | JND MODEL | -
dc.subject.keywordPlus | IMAGES | -
dc.subject.keywordPlus | PROFILE | -
dc.subject.keywordPlus | DIFFERENCE | -
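The abstract describes preprocessing that suppresses image detail falling below a JND threshold that grows with the encoder's quantization step size. The sketch below is only a loose, hypothetical illustration of that general idea, not the paper's ERJND, LR-JNQD, or CNN-JNQD models: the linear threshold rule and the `base_jnd`/`slope` parameters are invented here for demonstration.

```python
import numpy as np

def jnd_preprocess(block, q_step, base_jnd=2.0, slope=0.5):
    """Toy JND-guided preprocessing: zero out DCT coefficients whose
    magnitude falls below a threshold that rises with the quantization
    step, so the encoder spends no bits on imperceptible detail.
    Illustrative only; not the models proposed in the paper."""
    n = block.shape[0]
    # Orthonormal DCT-II matrix (separable 2-D DCT via C @ X @ C.T).
    c = np.array([[np.sqrt((1 if k == 0 else 2) / n) *
                   np.cos(np.pi * (2 * i + 1) * k / (2 * n))
                   for i in range(n)] for k in range(n)])
    coeffs = c @ block @ c.T
    threshold = base_jnd + slope * q_step  # assumed: JND level grows with q_step
    mask = np.abs(coeffs) >= threshold
    mask[0, 0] = True                      # always keep the DC (mean) term
    return c.T @ (coeffs * mask) @ c       # inverse DCT back to pixels
```

With a larger `q_step` the threshold rises, more coefficients are suppressed, and the preprocessed block deviates further from the original while its mean is preserved, which mirrors the abstract's claim that JND levels are adjusted automatically to the quantization step size.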
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
