Towards Domain Invariant Single Image Dehazing

Cited 18 times in Web of Science · Cited 0 times in Scopus
  • Hits: 343
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Yoon, Kuk-Jin | ko
dc.contributor.author | Shyam, Pranjay | ko
dc.contributor.author | Kim, Kyung-Soo | ko
dc.date.accessioned | 2021-01-28T06:05:08Z | -
dc.date.available | 2021-01-28T06:05:08Z | -
dc.date.created | 2020-12-04 | -
dc.date.issued | 2021-02-05 | -
dc.identifier.citation | AAAI Conference on Artificial Intelligence, pp. 9657-9665 | -
dc.identifier.issn | 2159-5399 | -
dc.identifier.uri | http://hdl.handle.net/10203/280095 | -
dc.description.abstract | Presence of haze in images obscures underlying information, which is undesirable in applications requiring accurate environment information. To recover such an image, a dehazing algorithm should localize and recover the affected regions while ensuring consistency between recovered regions and their neighbors. However, owing to the fixed receptive field of convolutional kernels and the non-uniform distribution of haze, assuring consistency between regions is difficult. In this paper, we utilize an encoder-decoder based network architecture to perform the task of dehazing and integrate a spatially aware channel attention mechanism to enhance features of interest beyond the receptive field of conventional convolutional kernels. To ensure consistent performance across a diverse range of haze densities, we utilize a greedy localized data augmentation mechanism. Synthetic datasets are typically used to provide a large number of paired training samples; however, the methodology used to generate such samples introduces a gap between them and real images, since it accounts only for uniform haze distribution and overlooks the more realistic scenario of non-uniform haze distribution, resulting in inferior dehazing performance when evaluated on real datasets. Despite this, the abundance of paired samples within synthetic datasets cannot be ignored. Thus, to ensure consistent performance across diverse datasets, we train the proposed network within an adversarial prior-guided framework that relies on a generated image along with its low- and high-frequency components to determine whether the properties of dehazed images match those of the ground truth. We perform extensive experiments to validate the dehazing and domain-invariance performance of the proposed framework across diverse domains and report state-of-the-art (SoTA) results. The source code with pretrained models will be available at https://github.com/PS06/DIDH. | -
dc.language | English | -
dc.publisher | Association for the Advancement of Artificial Intelligence | -
dc.title | Towards Domain Invariant Single Image Dehazing | -
dc.type | Conference | -
dc.identifier.wosid | 000681269801037 | -
dc.identifier.scopusid | 2-s2.0-85126836684 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 9657 | -
dc.citation.endingpage | 9665 | -
dc.citation.publicationname | AAAI Conference on Artificial Intelligence | -
dc.identifier.conferencecountry | CA | -
dc.identifier.conferencelocation | Vancouver Convention Centre | -
dc.contributor.localauthor | Yoon, Kuk-Jin | -
dc.contributor.localauthor | Kim, Kyung-Soo | -
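
The abstract above describes a spatially aware channel attention mechanism that enhances features of interest beyond the receptive field of conventional convolutional kernels. The paper's actual module is not reproduced here; the following is a minimal, hypothetical PyTorch sketch of one way such a block could work, predicting channel weights per spatial location rather than from a globally pooled vector (all names and design choices are assumptions, not the authors' code; the official implementation is at https://github.com/PS06/DIDH).

import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    """Hypothetical sketch: channel attention that keeps spatial resolution."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # 1x1 convolutions act as a per-pixel bottleneck MLP over channels,
        # so every location gets its own vector of channel weights.
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Attention map has shape (B, C, H, W): one weight per channel and
        # per location, unlike globally pooled channel attention.
        return x * self.net(x)

# Usage inside an encoder-decoder stage:
feat = torch.rand(2, 64, 128, 128)
out = SpatialChannelAttention(64)(feat)  # same shape as feat

Unlike squeeze-and-excitation attention, which pools spatially before predicting channel weights, a per-location variant like this can weight channels differently in lightly and heavily hazed regions, which matches the non-uniform haze scenario the abstract emphasizes.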
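
The abstract also states that the adversarial prior-guided framework judges a generated image together with its low- and high-frequency components. A common way to obtain such a decomposition is Gaussian low-pass filtering with the residual as the high-frequency part; the sketch below illustrates that idea and is an assumption about the decomposition, not the authors' published procedure.

import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 5, sigma: float = 1.5) -> torch.Tensor:
    # Normalized 1D Gaussian, then outer product -> 2D low-pass kernel.
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)  # shape (size, size), sums to 1

def frequency_split(img: torch.Tensor, size: int = 5, sigma: float = 1.5):
    """Split a batch of images (B, C, H, W) into low/high frequency parts."""
    c = img.shape[1]
    k = gaussian_kernel(size, sigma).to(img.device)
    k = k.expand(c, 1, size, size).contiguous()  # one kernel per channel
    low = F.conv2d(img, k, padding=size // 2, groups=c)  # depthwise blur
    high = img - low  # residual keeps edges and fine texture
    return low, high

# Usage: the dehazed output and its two components could each be scored
# against the matching decomposition of the ground-truth image.
dehazed = torch.rand(4, 3, 256, 256)
low, high = frequency_split(dehazed)
assert torch.allclose(low + high, dehazed)  # decomposition is exact

Because low + high reconstructs the input exactly, discriminators that see both components jointly observe the full image content, with the high-frequency branch emphasizing edges and texture, where haze artifacts are most visible.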
Appears in Collection
ME-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.