(A) design and implementation of CNN-based low-complexity super resolution hardware for real-time 4K UHD 60 fps video applications = 실시간 4K UHD 60fps 영상을 위한 심층 컨볼루션 신경망 기반 저복잡도 초해상화 하드웨어 설계 및 구현

Ultra-high-definition (UHD) video has recently become prevalent in UHD TV and IPTV services and in smartphone applications. While many high-end TVs and smartphones support 4K UHD video, many video streams are still delivered at full-high-definition (FHD) resolution (1,920×1,080) because of legacy acquisition devices and services. Therefore, a high-quality up-scaling technique that converts low-resolution (LR) content into high-resolution (HR) content is essential, especially for 2K FHD to 4K UHD video conversion. In this dissertation, we present a novel hardware-friendly super-resolution (SR) method based on convolutional neural networks (CNN) and its dedicated hardware (HW) on a field-programmable gate array (FPGA) for up-scaling FHD video streams to 4K UHD video streams in real time at 60 fps. In addition, we propose a CNN-based SR hardware architecture that can be extended to 3× and 4× up-scaling, not just 2× up-scaling. In our dedicated CNN-based SR HW, LR input frames are processed line by line, and the number of convolutional filter parameters is reduced significantly by incorporating depth-wise separable convolutions with a residual connection. The HW incorporates a cascade of 1D convolutions with large receptive fields along horizontal lines while keeping the vertical receptive field minimal, which saves line memory while achieving SR performance comparable to full 2D convolution operations. For efficient HW implementation, we use a simple and effective quantization method with little peak signal-to-noise ratio (PSNR) degradation. We also propose a compression method that efficiently stores intermediate feature-map data to reduce the number of line memories used in the HW. Our CNN-based SR HW implementation on the FPGA generates 4K UHD frames at 60 fps with higher PSNR values and better visual quality than conventional CNN-based SR methods trained and tested in software.
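The abstract describes the core architectural ideas in prose: depth-wise separable convolutions, a cascade of 1D convolutions with wide horizontal but minimal vertical receptive fields, and a residual connection. The PyTorch sketch below illustrates what such a block could look like; it is a minimal illustration, not the dissertation's actual network, and the module names, channel width, block depth, and kernel width used here are assumed values.

```python
# Minimal sketch (not the dissertation's exact architecture): a residual block built
# from depth-wise separable convolutions whose kernels are wide horizontally (1 x k)
# but only one pixel tall, so the vertical receptive field (and thus the number of
# hardware line buffers) stays small. All layer sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class SeparableConv1xK(nn.Module):
    """Depth-wise (1 x k) convolution followed by a point-wise (1 x 1) convolution."""
    def __init__(self, channels, k=7):
        super().__init__()
        # Depth-wise: one horizontal filter per channel -> large horizontal receptive
        # field with no growth of the vertical receptive field.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=(1, k),
                                   padding=(0, k // 2), groups=channels, bias=False)
        # Point-wise: mixes channels; holds most of the parameters but is cheap per pixel.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class ResidualSRBlock(nn.Module):
    """Cascade of horizontal separable convolutions with a residual (skip) connection."""
    def __init__(self, channels=16, depth=4, k=7):
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [SeparableConv1xK(channels, k), nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual connection: the convolutional body only learns the detail to add back.
        return x + self.body(x)

if __name__ == "__main__":
    x = torch.randn(1, 16, 64, 128)      # small demo feature map (N, C, H, W)
    y = ResidualSRBlock(channels=16)(x)
    print(y.shape)                        # torch.Size([1, 16, 64, 128])
```

Because every kernel in the sketch is one pixel tall, a layer only needs the current line of its input feature map, which is the property the abstract points to for keeping line-memory usage low in a streaming hardware implementation.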
Advisors
Kim, Munchurl (김문철)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2019
Identifier
325007
Language
eng
Description

Doctoral thesis (Ph.D.) - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2019.2, [v, 62 p.]

Keywords

Super-resolution; 4K UHD; deep learning; CNN; real-time; FPGA

URI
http://hdl.handle.net/10203/265139
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=842505&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Doctoral Theses)