Deep learning typically learns the parameters of a neural network from large amounts of data for a given task. In recent years, however, researchers have studied how to improve performance on multi-task learning problems using only small amounts of data. Among these approaches, optimization-based meta-learning learns the parameters of a new task by optimization-based fine-tuning from learned initial parameters. However, an analytic approach that considers the correlation between tasks from the perspective of game theory is still lacking. In this study, we formulate a task-specific optimization problem with joint constraints. We then characterize the generalized Nash equilibrium between tasks through the variational equilibrium and propose a new meta-learning algorithm. Finally, in simulations, the performance of the proposed model is compared with an existing optimization-based meta-learning model on a sinusoidal regression problem.
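For concreteness, the sinusoidal regression benchmark referred to above is the standard few-shot task family used to evaluate optimization-based meta-learning (e.g., MAML-style methods): each task is a sinusoid with its own amplitude and phase, and adaptation is a few gradient steps from shared initial parameters. The sketch below is an illustrative setup only, not the authors' algorithm; it omits the game-theoretic joint constraints, and all function names, the random-feature model, and the hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sinusoid_task(k_shot=10):
    # Each task is y = A * sin(x + phi); the amplitude/phase ranges follow
    # the common few-shot sinusoid benchmark (an assumption here).
    A = rng.uniform(0.1, 5.0)
    phi = rng.uniform(0.0, np.pi)
    x = rng.uniform(-5.0, 5.0, size=(k_shot, 1))
    return x, A * np.sin(x + phi)

def features(x, n_basis=20):
    # Fixed Gaussian-bump features, so the per-task model is linear in w.
    centers = np.linspace(-5.0, 5.0, n_basis)
    return np.exp(-0.5 * (x - centers) ** 2)

def task_loss(w, x, y):
    # Mean squared error of the linear model on this task's support set.
    return float(np.mean((features(x) @ w - y) ** 2))

def inner_adapt(w, x, y, lr=0.05, steps=5):
    # Optimization-based fine-tuning: a few gradient steps on the task
    # loss, starting from the shared initial parameters w.
    Phi = features(x)
    for _ in range(steps):
        grad = 2.0 * Phi.T @ (Phi @ w - y) / len(y)
        w = w - lr * grad
    return w

# Shared meta-initialization (zeros, purely for illustration); in an actual
# meta-learning outer loop this would be trained across sampled tasks.
w0 = np.zeros((20, 1))
x, y = sample_sinusoid_task()
w_adapted = inner_adapt(w0, x, y)
print(task_loss(w0, x, y), "->", task_loss(w_adapted, x, y))
```

A full method would wrap this inner loop in an outer loop that updates `w0` across tasks; the proposed approach additionally couples the tasks' optimization problems through joint constraints.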