Radar-to-photometric image translation using context-aware, misalignment-robust and confidence-guided generative models

DC Field: Value (Language)
dc.contributor.advisor: 김문철
dc.contributor.author: Lee, Jaehyup
dc.contributor.author: 이재협
dc.date.accessioned: 2024-08-08T19:31:33Z
dc.date.available: 2024-08-08T19:31:33Z
dc.date.issued: 2024
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1100042&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/322142
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2024.2, [ix, 124 p.]
dc.description.abstract: Satellite synthetic aperture radar (SAR) images are immensely valuable because they can be acquired regardless of weather and time conditions. However, SAR images suffer from severe noise and carry less contextual information than optical images, which makes them harder to interpret. Therefore, translation of SAR images to electro-optical (EO) images is highly desirable for easier interpretation. In this dissertation, we propose a novel coarse-to-fine context-aware SAR-to-EO translation (CFCA-SET) framework and a misalignment-resistant (MR) loss for misaligned pairs of SAR-EO images. With auxiliary learning of SAR-to-near-infrared (NIR) translation, CFCA-SET is trained in two stages: 1) in the coarse stage, low-resolution SAR-to-EO translation is learned via a local self-attention module that helps diminish the SAR noise, and 2) in the fine stage, the resulting output is used as guidance to generate high-resolution SAR colorization. The proposed auxiliary learning of SAR-to-NIR translation successfully leads CFCA-SET to learn the distinguishable characteristics of various SAR objects with less confusion in a context-aware manner. To handle the inevitable misalignment between SAR and EO images, we design a new MR loss function. We also use a pre-trained segmentation network to feed segmentation regions and their labels into the learning of the SAR-to-EO translation. The resulting ECFCA-SET can be trained to effectively learn the translation of regions with confusing contexts by utilizing this segmentation information together with a locally adaptive confidence mask loss function. Furthermore, because collecting paired SAR-EO image datasets is difficult, we propose a novel SAR-EO data augmentation strategy via a diffusion process. Extensive experimental results show that CFCA-SET can generate more recognizable and understandable EO-like images than other methods in terms of nine image quality metrics.
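The abstract does not specify the MR loss formula. As a rough illustration only of the underlying idea, namely not over-penalizing slightly misaligned SAR-EO training pairs, one could take the minimum pixel-wise error over small spatial shifts of the target. The sketch below (plain NumPy; the names `mr_loss` and `max_shift` are hypothetical and not from the thesis) is one such toy interpretation, not the thesis's actual loss:

```python
import numpy as np

def mr_loss(pred, target, max_shift=1):
    """Toy misalignment-resistant loss: minimum mean-L1 error over all
    integer shifts of the target within +/- max_shift pixels, so a
    slightly misaligned ground truth is not over-penalized.
    (Illustrative sketch only; not the formulation in the thesis.)"""
    best = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Circularly shift the target and measure the L1 error.
            shifted = np.roll(target, (dy, dx), axis=(0, 1))
            best = min(best, np.abs(pred - shifted).mean())
    return best
```

For example, if the target is the prediction shifted by one pixel, this loss returns zero while a plain L1 loss would not, which is the intuition behind tolerating small registration errors.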
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.subject: Satellite synthetic aperture radar (SAR); Electro-optical (EO); Image translation algorithm
dc.subject: Radar-to-photometric image translation using context-aware; Misalignment-robust and confidence-guided generative models; SAR
dc.title: Radar-to-photometric image translation using context-aware, misalignment-robust and confidence-guided generative models
dc.title.alternative: A study on radar-to-optical image translation using context-aware, misalignment-robust, and confidence-guided generative models
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology: School of Electrical Engineering
dc.contributor.alternativeauthor: Kim, Munchurl
Appears in Collection: EE-Theses_Ph.D. (Ph.D. theses)
Files in This Item: There are no files associated with this item.
