A performance comparison of convolutional neural network-based image denoising methods: The effect of loss functions on low-dose CT images

Cited 59 times in Web of Science · Cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Kim, Byeongjoon | ko
dc.contributor.author | Han, Minah | ko
dc.contributor.author | Shim, Hyunjung | ko
dc.contributor.author | Baek, Jongduk | ko
dc.date.accessioned | 2022-07-04T06:00:19Z | -
dc.date.available | 2022-07-04T06:00:19Z | -
dc.date.created | 2022-07-04 | -
dc.date.issued | 2019-09 | -
dc.identifier.citation | MEDICAL PHYSICS, v.46, no.9, pp.3906 - 3923 | -
dc.identifier.issn | 0094-2405 | -
dc.identifier.uri | http://hdl.handle.net/10203/297173 | -
dc.description.abstract |
Purpose: Convolutional neural network (CNN)-based image denoising techniques have shown promising results in low-dose CT denoising. However, CNNs often introduce blurring in denoised images when trained with a widely used pixel-level loss function. Perceptual loss and adversarial loss have recently been proposed to further improve image denoising performance. In this paper, we investigate the effect of different loss functions on image denoising performance using task-based image quality assessment methods for various signals and dose levels.
Methods: We used a modified version of U-net that was effective at reducing the correlated noise in CT images. The loss functions used for comparison were two pixel-level losses (i.e., the mean squared error and the mean absolute error), the Visual Geometry Group network-based perceptual loss (VGG loss), the adversarial loss used to train the Wasserstein generative adversarial network with gradient penalty (WGAN-GP), and their weighted summation (a hedged sketch of this weighted-loss composition follows the metadata fields below). Each image denoising method was applied to reconstructed images and sinogram images independently and validated using the extended cardiac-torso (XCAT) simulation and Mayo Clinic datasets. In the XCAT simulation, we generated fan-beam CT datasets with four different dose levels (25%, 50%, 75%, and 100% of a normal-dose level) using 10 XCAT phantoms and inserted signals into a test set. The signals had two different shapes (spherical and spiculated), sizes (4 and 12 mm), and contrast levels (60 and 160 HU). To evaluate signal detectability, we used a detection-task SNR (tSNR) calculated from a non-prewhitening model observer with an eye filter. We also measured the noise power spectrum (NPS) and modulation transfer function (MTF) to compare the noise and signal transfer properties.
Results: Compared to CNNs without VGG loss, VGG-loss-based CNNs achieved a tSNR closer to that of normal-dose CT for all signals at all dose levels except for the small signal at the 25% dose level. For a low-contrast signal at the 25% or 50% dose level, adding the other losses to the VGG loss improved performance more than using the VGG loss alone. The NPS shapes from VGG-loss-based CNNs closely matched those of normal-dose CT images, whereas CNNs without VGG loss overly reduced the mid-high-frequency noise power at all dose levels. The MTF results also showed that VGG-loss-based CNNs better preserved high resolution for all dose and contrast levels. We also observed that the additional WGAN-GP loss helps improve the noise and signal transfer properties of VGG-loss-based CNNs.
Conclusions: The evaluation results using tSNR, NPS, and MTF indicate that VGG-loss-based CNNs are more effective than those without VGG loss for natural denoising of low-dose images and that the WGAN-GP loss further improves the denoising performance of VGG-loss-based CNNs, which corresponds with the qualitative evaluation.
dc.language | English | -
dc.publisher | WILEY | -
dc.title | A performance comparison of convolutional neural network-based image denoising methods: The effect of loss functions on low-dose CT images | -
dc.type | Article | -
dc.identifier.wosid | 000479514400001 | -
dc.identifier.scopusid | 2-s2.0-85070707587 | -
dc.type.rims | ART | -
dc.citation.volume | 46 | -
dc.citation.issue | 9 | -
dc.citation.beginningpage | 3906 | -
dc.citation.endingpage | 3923 | -
dc.citation.publicationname | MEDICAL PHYSICS | -
dc.identifier.doi | 10.1002/mp.13713 | -
dc.contributor.localauthor | Shim, Hyunjung | -
dc.contributor.nonIdAuthor | Kim, Byeongjoon | -
dc.contributor.nonIdAuthor | Han, Minah | -
dc.contributor.nonIdAuthor | Baek, Jongduk | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | adversarial loss | -
dc.subject.keywordAuthor | deep learning | -
dc.subject.keywordAuthor | feature-level loss | -
dc.subject.keywordAuthor | image denoising | -
dc.subject.keywordAuthor | low-dose CT | -
dc.subject.keywordAuthor | mathematical observer | -
dc.subject.keywordAuthor | modulation transfer function | -
dc.subject.keywordAuthor | noise power spectrum | -
dc.subject.keywordPlus | NOISE | -
dc.subject.keywordPlus | FRAMELETS | -
dc.subject.keywordPlus | OBSERVER | -
dc.subject.keywordPlus | SPECTRUM | -
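
The Methods paragraph of the abstract describes combining two pixel-level losses, a VGG-based perceptual loss, and a WGAN-GP adversarial loss by weighted summation. The following PyTorch sketch shows one way such a composite loss can be assembled; it is not the authors' implementation, and the VGG layer cut-off, the loss weights (w_pixel, w_vgg, w_adv), the single-channel-to-RGB handling, and the critic interface are illustrative assumptions.

# Minimal sketch (not the paper's code) of a weighted-sum loss:
# pixel-level loss + VGG perceptual loss + WGAN-GP adversarial loss.
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights  # requires torchvision >= 0.13


class VGGFeatureLoss(nn.Module):
    """Feature-level (perceptual) loss computed on frozen VGG-19 activations."""

    def __init__(self, layer_index: int = 16):  # layer cut-off is an assumption
        super().__init__()
        features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:layer_index]
        for p in features.parameters():
            p.requires_grad = False
        self.features = features.eval()
        self.mse = nn.MSELoss()

    def forward(self, denoised: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # CT slices are single-channel; VGG expects 3 channels, so repeat.
        # ImageNet mean/std normalization is omitted here for brevity.
        d3 = denoised.repeat(1, 3, 1, 1)
        t3 = target.repeat(1, 3, 1, 1)
        return self.mse(self.features(d3), self.features(t3))


def gradient_penalty(critic: nn.Module, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """Standard WGAN-GP penalty on samples interpolated between real and fake."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()


def generator_loss(denoised, target, critic, vgg_loss=None,
                   w_pixel=1.0, w_vgg=0.1, w_adv=1e-3):
    """Weighted summation of pixel, perceptual, and adversarial terms.

    The weights are placeholders, not the values used in the paper.
    """
    pixel = nn.functional.l1_loss(denoised, target)       # MAE; MSE is the other pixel loss
    perceptual = vgg_loss(denoised, target) if vgg_loss else denoised.new_zeros(())
    adversarial = -critic(denoised).mean()                 # WGAN generator term
    return w_pixel * pixel + w_vgg * perceptual + w_adv * adversarial

In a typical WGAN-GP setup the critic and the denoising network are updated alternately, with gradient_penalty added to the critic's objective; the weights above would be tuned per task rather than taken as given.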
Appears in Collection: AI-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.