Scene-space video extrapolation (Korean title: Scene-information-based video region extension technique)

Abstract
With the popularity of immersive display systems that fill the viewer's field of view (FOV) entirely, demand for wide-FOV content has increased. Video extrapolation, which reuses existing videos, is one of the most efficient ways to produce such content. Extrapolating a video is challenging, however, because few cues are available for estimating the extended region. This paper introduces a novel framework that extrapolates an input video and thereby converts conventional content into wide-FOV content. The key idea of the proposed approach is to integrate the information from all frames in the input video into each frame. Utilizing the information from all frames is crucial because a 2D transformation-based approach struggles when parallax caused by camera motion is apparent. Warping guided by scene-space information matches the viewpoints between the different frames, and the matched frames are blended to create extended views. Various experiments demonstrate that the results of the proposed method are more visually plausible than those produced using state-of-the-art techniques.
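The abstract outlines a pipeline that warps every frame into a common, enlarged reference view and blends the overlaps, so that regions outside one frame's FOV are filled from other frames. The following is a minimal sketch of that idea, assuming per-frame depth maps and camera poses are available as inputs; the function names (warp_to_reference, extrapolate_frame) and all parameters are hypothetical illustrations, not the thesis' actual implementation, whose scene-space-guided warping and blending are more sophisticated.

```python
import numpy as np

def warp_to_reference(frame, depth, K, pose, K_ref, pose_ref, out_hw):
    """Forward-warp one source frame into an extended reference view.

    frame: (H, W, 3) color image; depth: (H, W) per-pixel depth;
    K / pose: 3x3 intrinsics and 4x4 camera-to-world pose of the source;
    K_ref / pose_ref: the same for the reference view; out_hw: canvas size.
    K_ref's principal point is assumed shifted so the original view sits
    inside the larger canvas.
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).T.astype(float)
    # Back-project pixels into the source camera, then lift to world space.
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    world = pose @ np.vstack([cam, np.ones((1, cam.shape[1]))])
    # Re-project the 3D points into the reference camera.
    ref = np.linalg.inv(pose_ref) @ world
    proj = K_ref @ ref[:3]
    z = proj[2]
    u = np.round(proj[0] / z).astype(int)
    v = np.round(proj[1] / z).astype(int)
    Ho, Wo = out_hw
    canvas = np.zeros((Ho, Wo, 3))
    weight = np.zeros((Ho, Wo))
    ok = (z > 0) & (u >= 0) & (u < Wo) & (v >= 0) & (v < Ho)
    colors = frame.reshape(-1, 3)
    # Nearest-neighbor splat; overlapping contributions are averaged later.
    np.add.at(canvas, (v[ok], u[ok]), colors[ok])
    np.add.at(weight, (v[ok], u[ok]), 1.0)
    return canvas, weight

def extrapolate_frame(frames, depths, Ks, poses, ref_idx, K_ref, out_hw):
    """Blend every warped frame into one extended view by averaging."""
    acc = np.zeros((*out_hw, 3))
    wsum = np.zeros(out_hw)
    for f, d, K, P in zip(frames, depths, Ks, poses):
        c, w = warp_to_reference(f, d, K, P, K_ref, poses[ref_idx], out_hw)
        acc += c
        wsum += w
    return acc / np.maximum(wsum, 1e-8)[..., None]
```

In this sketch, regions never seen by any camera stay black; the benefit of integrating all frames, as the abstract argues, is that camera motion reveals content outside a single frame's FOV, which the other frames can then supply.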
Advisors
Noh, Junyong (노준용)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2018
Identifier
325007
Language
eng
Description

Doctoral thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), Graduate School of Culture Technology, 2018.2, [vi, 43 p.]

Keywords

video extrapolation; peripheral vision; immersive display; immersive content; image warping

URI
http://hdl.handle.net/10203/283430
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=886613&flag=dissertation
Appears in Collection
GCT-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
