The VR/AR and media industries are growing rapidly. Consequently, the demand in post-production for visual perfection is increasing. One of the most common post-production techniques is head swapping, where the target head in a scene is replaced with a source head while preserving the original head's pose, facial expression, and lighting. Applications of head swapping include bringing deceased actors back to the screen using stand-ins, action scenes featuring stunt performers, and virtual human creation for VR/AR. To perform head swapping, current techniques require costly equipment, expert human intervention, and meticulous planning during production. This underscores an urgent need for a more efficient, cost-effective solution that can streamline the head-swapping process. Leveraging StyleGAN's ability to disentangle facial features, we propose the first head-swapping method based on a pre-trained generative model, significantly reducing inference time compared to the state-of-the-art solution. Our solution introduces a novel background blending optimization method and an ambient light correction module, producing promising results with seamless compositing of the source head into the target frame. We envisage that this research will shed light on this understudied topic and foster further scholarly discourse in the realm of head swapping.
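To make the compositing step concrete, the sketch below shows a generic way to blend a source head into a target frame using Poisson blending (OpenCV's seamlessClone). This is only an illustration of the seam-hiding idea, not the paper's background blending optimization or ambient light correction; the file paths and the binary head mask are hypothetical placeholders.

```python
# Generic head-compositing sketch via Poisson blending; NOT the paper's method.
import cv2
import numpy as np

source = cv2.imread("source_head.png")    # hypothetical: frame containing the source head
target = cv2.imread("target_frame.png")   # hypothetical: frame whose head is replaced
mask = cv2.imread("head_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical: 255 inside the head region

# Place the source head at the centroid of the mask in the target frame.
ys, xs = np.nonzero(mask)
center = (int(xs.mean()), int(ys.mean()))

# Poisson blending matches image gradients along the seam, so the pasted
# head picks up the target's local color and avoids a visible hard edge.
composite = cv2.seamlessClone(source, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("composite.png", composite)
```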