Generation of a bird view image from a pixel-level frontal view image by using a generative adversarial network

Robust understanding of traffic scenes, as a basis for executing driving strategies and planning routes, is a cornerstone of autonomous driving, and a bird view is an essential component for creating panoramas of the surroundings. Since there is a large gap between bird views and other views, such as the front view, synthesizing the corresponding bird view is quite challenging. Generative adversarial networks (GANs), which have developed rapidly in recent years, employ a minimax game between a generator module and a discriminator module for image conversion and synthesis. This dissertation applies a new framework to bird-view synthesis for modern autonomous driving.

Firstly, inspired by the correspondence between pixels, this dissertation applies a pixel-level GAN to achieve one-to-one generation from a front view to the corresponding bird view. In the generator module, unlike the original GAN, which uses random vectors as input, the proposed method uses an encoder and a decoder: it takes the source-domain image directly as input, retains its semantic characteristics, and is built from convolutional neural networks. In the discriminator module, in addition to the real/fake discriminator, the proposed network adds a second discriminator, called the identification discriminator, to strengthen the correlation between the source domain and the target domain and to avoid the loss of identification information.

Secondly, we use a dataset from the Grand Theft Auto V (GTA5) video game that resembles real-world autonomous driving scenes. The camera automatically toggles between the front view and the bird view at each time step; the paired images with low similarity in the same frame are then packed into the training set and the test set. To output the corresponding bird view, a method for fine-tuning the network is discussed, covering the design of layers, parameters, and a reasonable number of training epochs.
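The two-discriminator objective described above can be sketched as follows. This is a minimal illustration, not the thesis's exact loss: the binary cross-entropy form, the example batch scores, and the weight `lam` balancing the identification term are all assumptions for the sketch.

```python
import math

def bce(preds, targets):
    """Binary cross-entropy over discriminator scores in (0, 1)."""
    eps = 1e-7
    total = 0.0
    for p, t in zip(preds, targets):
        p = min(max(p, eps), 1 - eps)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(preds)

def generator_loss(d_rf_on_fake, d_id_on_pair, lam=1.0):
    """Combined generator objective with two discriminators.

    d_rf_on_fake : real/fake discriminator scores on generated bird views
    d_id_on_pair : identification discriminator scores on (front, generated) pairs
    lam          : hypothetical weight balancing the two terms
    """
    # The generator tries to drive both discriminators toward "real" (1.0):
    adv = bce(d_rf_on_fake, [1.0] * len(d_rf_on_fake))   # fool real/fake discriminator
    idn = bce(d_id_on_pair, [1.0] * len(d_id_on_pair))   # preserve source identity
    return adv + lam * idn

# Example: scores for a batch of four generated bird views.
loss = generator_loss([0.3, 0.6, 0.5, 0.4], [0.7, 0.8, 0.6, 0.9], lam=0.5)
```

As the generator improves, both score lists approach 1.0 and the combined loss decreases; the identification term is what ties the generated bird view back to its source front view.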
Additionally, various front views from more complex scenes are used for testing; bird views are generated under each parameter setting, epoch setting, and architecture optimization. Finally, an experimental evaluation is conducted based on the LPIPS algorithm, which contains two modules: one computes the distance between image patches, while the other computes the perceptual loss. The evaluation uses the LPIPS algorithm to calculate the difference score between the synthesized image and the real bird view; compared with other methods, the error is reduced by 40.96% on average. The parallax image is also visualized to build a distance map, so that a comprehensive and objective analysis of the pixel-level generative adversarial network can be made from the score and the distance map. In summary, the proposed network neither uses complex geometric transformations nor introduces multiple intermediate views, and it can be applied in autonomous driving to transform a front view into a high-resolution bird view of the road environment.
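The LPIPS-style scoring used in the evaluation can be illustrated schematically. The toy function below mimics only the structure of the metric — unit-normalized feature differences averaged over spatial positions and summed across layers — and uses made-up feature values and per-layer scalar weights in place of the trained LPIPS network's deep features and learned per-channel weights.

```python
import math

def _unit(v):
    """Normalize a channel vector to unit length."""
    n = math.sqrt(sum(x * x for x in v)) + 1e-10
    return [x / n for x in v]

def lpips_like_distance(feats_a, feats_b, weights):
    """Schematic LPIPS-style score (not the trained network).

    feats_a, feats_b : per layer, a list of channel vectors (one per spatial position)
    weights          : hypothetical per-layer scalars standing in for learned weights
    """
    total = 0.0
    for fa, fb, w in zip(feats_a, feats_b, weights):
        layer = 0.0
        for va, vb in zip(fa, fb):
            ua, ub = _unit(va), _unit(vb)
            # Squared difference between unit-normalized channel vectors.
            layer += sum((a - b) ** 2 for a, b in zip(ua, ub))
        total += w * layer / len(fa)  # average over spatial positions
    return total

# Two toy "feature stacks": 2 layers, 3 spatial positions, 4 channels each.
fa = [[[1.0, 0.0, 2.0, 1.0]] * 3, [[0.5, 1.5, 0.0, 1.0]] * 3]
fb = [[[1.0, 1.0, 0.0, 2.0]] * 3, [[0.5, 1.5, 0.0, 1.0]] * 3]
score_same = lpips_like_distance(fa, fa, [0.6, 0.4])  # identical stacks -> 0
score_diff = lpips_like_distance(fa, fb, [0.6, 0.4])  # differing stacks -> positive
```

A lower score means the synthesized bird view is perceptually closer to the ground-truth bird view, which is how the 40.96% average error reduction above is measured.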
Advisors
Lee, Chang-Hee (이창희)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology : School of Electrical Engineering, 2020.8, [iii, 35 p.]

Keywords

autonomous driving technology; front view; bird view; generative adversarial networks; LPIPS algorithm

URI
http://hdl.handle.net/10203/285092
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925256&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's theses)
