This paper considers the problem of learning an air combat maneuver model from an expert pilot's trajectories. Most imitation learning approaches require large amounts of training data and must interact with a real environment, even when the dynamics of the enemy aircraft are uncertain. We therefore propose a new approach that (i) trains an internal model, composed of an MDN-RNN and a controller, that represents future states and imitates the expert's maneuvering, and (ii) generates expert-like trajectories via a dreaming process that imagines an engagement in a hypothetical environment model. This approach requires neither interaction with the real environment nor a reward function for training. We demonstrate the similarity between the expert trajectory and the trajectory reconstructed by the proposed model.
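As a rough illustration of the dreaming idea (not the authors' implementation), the sketch below shows a mixture density head over an RNN hidden state: the head maps a hidden state to Gaussian mixture parameters for the next state, and a dreamed rollout is produced by sampling from that predicted mixture. All dimensions, weights, and names (`STATE_DIM`, `mdn_params`, `sample_next_state`) are hypothetical placeholders; a real model would learn the weights from expert trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4    # hypothetical aircraft state, e.g. x, y, heading, speed
HIDDEN_DIM = 16  # hypothetical RNN hidden-state size
N_MIX = 3        # number of Gaussian mixture components

# Stand-in parameters for a trained MDN head (random here, for illustration only)
W_pi = rng.normal(0, 0.1, (HIDDEN_DIM, N_MIX))
W_mu = rng.normal(0, 0.1, (HIDDEN_DIM, N_MIX * STATE_DIM))
W_sigma = rng.normal(0, 0.1, (HIDDEN_DIM, N_MIX))

def mdn_params(h):
    """Map an RNN hidden state to mixture weights, means, and stddevs."""
    logits = h @ W_pi
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                             # softmax mixture weights
    mu = (h @ W_mu).reshape(N_MIX, STATE_DIM)  # per-component mean next state
    sigma = np.exp(h @ W_sigma)                # positive per-component stddev
    return pi, mu, sigma

def sample_next_state(h):
    """'Dream' one step: sample the next state from the predicted mixture."""
    pi, mu, sigma = mdn_params(h)
    k = rng.choice(N_MIX, p=pi)                # pick a mixture component
    return mu[k] + sigma[k] * rng.normal(size=STATE_DIM)

# Roll out a short dreamed trajectory from a fixed hidden state
h = rng.normal(size=HIDDEN_DIM)
trajectory = [sample_next_state(h) for _ in range(5)]
```

In the full approach, the sampled next state would be fed back (together with a controller action) to advance the RNN hidden state, so the model can imagine entire engagements without touching the real environment.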