Robust understanding of traffic scenes is a cornerstone of autonomous driving, and the bird's-eye view is an essential component for creating panoramas of a vehicle's surroundings. However, because the two domains differ greatly in color, vehicle types, landmarks, and occlusions, synthesizing the bird's-eye view associated with a given front view is quite challenging. We therefore propose a new framework for bird's-eye view generation that employs a network with one generator and two discriminators. The generator consists of an encoder and a decoder; the real/fake discriminator is inspired by the original GAN, while the identification discriminator is designed to improve the relevance between the source and target domains. Unlike previous methods, our approach relies on neither a geometry-based transformation nor an intermediate view. The proposed network successfully synthesizes the associated bird's-eye view from a front view with sharper details and higher accuracy.
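To make the two-discriminator setup concrete, the sketch below shows one plausible way the generator's objective could combine both discriminators: a standard GAN adversarial term from the real/fake discriminator, plus a term from the identification discriminator scoring the (front view, synthesized bird's-eye view) pair. The function names, the pairing scheme, and the weighting factor `lam` are illustrative assumptions, not the authors' actual formulation.

```python
import math

def bce(pred: float, target: float, eps: float = 1e-7) -> float:
    """Binary cross-entropy for a single probability prediction."""
    pred = min(max(pred, eps), 1.0 - eps)
    return -(target * math.log(pred) + (1.0 - target) * math.log(1.0 - pred))

def generator_loss(d_realfake_on_fake: float,
                   d_ident_on_pair: float,
                   lam: float = 1.0) -> float:
    """The generator tries to make both discriminators output 'real' (1.0).

    d_realfake_on_fake: real/fake discriminator's score on the synthesized view
    d_ident_on_pair:    identification discriminator's score on the
                        (front view, synthesized bird's-eye view) pair
    lam:                assumed weighting between the two terms
    """
    adv = bce(d_realfake_on_fake, 1.0)    # fool the real/fake discriminator
    ident = bce(d_ident_on_pair, 1.0)     # make the pair look like a true correspondence
    return adv + lam * ident

# A poorly fooled pair of discriminators yields a large generator loss;
# well-fooled discriminators yield a small one.
loss_bad = generator_loss(0.1, 0.2)
loss_good = generator_loss(0.9, 0.95)
```

The point of the second term is that a vanilla real/fake discriminator alone only enforces that outputs look like plausible bird's-eye views; the identification discriminator additionally ties each output to its specific front-view input.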