Deep video inpainting guided by audio-visual self-supervision

Humans can easily imagine a scene from auditory information based on their prior knowledge of audio-visual events. In this paper, we mimic this innate human ability in deep learning models to improve the quality of video inpainting. To implement this prior knowledge, we first train an audio-visual network to learn the correspondence between auditory and visual information. The audio-visual network is then employed as a guider that conveys the prior knowledge of audio-visual correspondence to the video inpainting network. This knowledge is transferred through our two proposed losses, an audio-visual attention loss and an audio-visual pseudo-class consistency loss, which further improve the performance of the video inpainting network. Both losses encourage the inpainting result to correspond closely to its synchronized audio. Experimental results demonstrate that our proposed method can restore a wider range of video scenes and is particularly effective when the sounding object in the scene is partially occluded. This thesis is based on the author's original paper [1].
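As a rough, hedged illustration of how the two guidance losses described in the abstract might look in code, the PyTorch sketch below compares audio-conditioned attention maps and pseudo-class predictions of an inpainted clip against those of the ground-truth clip under the same audio. The network interface (AudioVisualNet, attention, pseudo_class_logits), the embedding sizes, and the choice of L1 and KL distances are assumptions made for illustration only; the thesis's exact formulations are not reproduced on this page.

# Hypothetical sketch of the two guidance losses named in the abstract.
# All class/function names and distance choices are assumptions, not the
# thesis's actual implementation.
import torch
import torch.nn.functional as F


class AudioVisualNet(torch.nn.Module):
    """Placeholder for a pretrained audio-visual correspondence network."""

    def __init__(self, num_pseudo_classes: int = 16):
        super().__init__()
        self.video_enc = torch.nn.Conv3d(3, 32, kernel_size=3, padding=1)
        self.audio_enc = torch.nn.Linear(128, 32)
        self.cls_head = torch.nn.Linear(32, num_pseudo_classes)

    def attention(self, frames, audio):
        # frames: (B, 3, T, H, W), audio: (B, 128) synchronized audio embedding
        v = self.video_enc(frames)                       # (B, 32, T, H, W)
        a = self.audio_enc(audio)[:, :, None, None, None]
        att = (v * a).sum(dim=1)                         # audio-visual score map
        return torch.sigmoid(att)                        # (B, T, H, W)

    def pseudo_class_logits(self, frames, audio):
        v = self.video_enc(frames).mean(dim=(2, 3, 4))   # global video feature
        a = self.audio_enc(audio)
        return self.cls_head(v * a)                      # (B, num_pseudo_classes)


def audio_visual_guidance_losses(av_net, inpainted, target, audio):
    """Attention loss and pseudo-class consistency loss between the inpainted
    clip and the ground-truth clip, both conditioned on the same audio."""
    att_fake = av_net.attention(inpainted, audio)
    att_real = av_net.attention(target, audio)
    attention_loss = F.l1_loss(att_fake, att_real)

    logits_fake = av_net.pseudo_class_logits(inpainted, audio)
    logits_real = av_net.pseudo_class_logits(target, audio)
    consistency_loss = F.kl_div(
        F.log_softmax(logits_fake, dim=1),
        F.softmax(logits_real, dim=1),
        reduction="batchmean",
    )
    return attention_loss, consistency_loss


if __name__ == "__main__":
    av_net = AudioVisualNet().eval()          # assumed pretrained and frozen
    inpainted = torch.rand(2, 3, 4, 64, 64)   # (B, C, T, H, W) inpainted clip
    target = torch.rand(2, 3, 4, 64, 64)      # ground-truth clip
    audio = torch.rand(2, 128)                # synchronized audio embedding
    l_att, l_cls = audio_visual_guidance_losses(av_net, inpainted, target, audio)
    print(float(l_att), float(l_cls))

In a full training pipeline, these two terms would presumably be weighted and added to the standard video-inpainting reconstruction losses, with the audio-visual guider network kept frozen.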
Advisors
Yoon, Sung-Eui (윤성의)
Description
Korea Advanced Institute of Science and Technology (KAIST) : School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2022
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology : School of Computing, 2022.2, [iv, 21 p.]

URI
http://hdl.handle.net/10203/309555
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=997568&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
