A Study on Reference-Based Image Inpainting Using Transformers with Reference Attention (참조 어텐션 트랜스포머를 이용한 참조 영상 기반 이미지 인페인팅 연구)

Image inpainting, one of the most important problems in image restoration, is the task of filling masked regions as naturally as possible by considering both the surrounding local information and the global context. Since the advent of deep learning, inpainting performance has improved rapidly, and the problem settings have grown more challenging, for example through larger masks. As the masks grow, however, the output depends more heavily on the generative model and often differs from what users expect in real applications. For other single-image tasks, reference-based restoration methods have been proposed to produce reliable, user-intended results when little information remains in the input image. Reference-based super-resolution is one such example, and it achieved larger performance gains than conventional single-image methods. Given a reference image, it is therefore expected that accurate inpainted results can be obtained as the user intended. However, since inpainting must retrieve information about the masked region of the input from the reference image, exploiting references is much harder than in super-resolution, which can simply compute similarities between visible regions. As a result, existing reference-based inpainting methods have a major limitation: the reference and input image pairs must come from nearly the same scene. In this thesis, we therefore propose a transformer-based model with novel reference attention modules, trained on synthesized reference datasets, so that it can be applied in more general situations where the reference and input images are only partially similar.
We confirm that the proposed network successfully uses reference images as a guide to fill the missing regions of the input on synthetic datasets, and that it outperforms single-image inpainting and video inpainting models even on real datasets not seen during training.
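The abstract does not include code, but the core idea of a reference attention module is cross-attention: features from the masked input act as queries, and features from the reference image act as keys and values, so that masked regions are filled with a similarity-weighted mixture of reference content. The following is a minimal, self-contained sketch of that mechanism; all function and variable names are hypothetical illustrations, not the thesis's actual implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def reference_attention(queries, keys, values):
    """Scaled dot-product cross-attention (illustrative only).

    queries: features of masked input patches, shape [Nq][d]
    keys/values: features of reference-image patches, [Nr][d] / [Nr][dv]
    Each query attends over all reference patches and returns a
    similarity-weighted sum of the reference values.
    """
    d = len(queries[0])
    out = []
    for q in queries:
        # similarity of this masked patch to every reference patch
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # fill the masked patch with a weighted blend of reference content
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy example: one masked-patch query, two reference patches.
masked_queries = [[1.0, 0.0]]
ref_keys = [[1.0, 0.0], [0.0, 1.0]]
ref_values = [[10.0, 0.0], [0.0, 10.0]]
filled = reference_attention(masked_queries, ref_keys, ref_values)
# The query matches the first reference patch more closely, so the
# output is dominated by (but not equal to) that patch's value.
```

In the actual model, such modules would operate on learned feature maps inside a transformer and be trained end-to-end on the synthesized reference datasets described above.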
Advisors
Kim, Munchurl (김문철)
Description
Korea Advanced Institute of Science and Technology: School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2023.2, [iv, 43 p.]

Keywords

reference-based inpainting; deep learning; reference attention; transformer; synthesized reference dataset

URI
http://hdl.handle.net/10203/309824
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032897&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
