This study explores a new method for generating virtual characters in video for use in virtual and augmented reality services. We propose F2RPC, a unified framework that improves both the appearance and motion of such characters, unlike previous methods that focus on only one or the other. Specifically, F2RPC consists of two modules: AdaGPEN for image destylization and PCGPEN for face reenactment. Experimental results show that our method solves the task more successfully than a combination of state-of-the-art destylization and reenactment methods.