User modeling, which learns a low-dimensional representation of a user from the user's past behaviors, has attracted growing interest in industry as a foundation for personalized services. Earlier work on user modeling primarily concentrates on learning task-specific user representations, each tailored to a single task. Since developing a task-specific representation for every task is impractical, recent research introduces the concept of a universal user representation, a more generalized representation applicable across a diverse range of tasks. Despite their effectiveness, existing approaches to learning universal user representations remain impractical in real-world applications: tasks arrive sequentially over time, yet these approaches neglect the passage of time and the distribution shifts it brings. In this paper, we propose a novel continual user representation learning method that facilitates positive knowledge transfer between tasks, particularly in scenarios where the data distribution changes over time. The main idea is a novel selective forward knowledge transfer module with a pseudo-representing strategy that alleviates the long-standing problem of continual learning, i.e., catastrophic forgetting. Moreover, we introduce a selective backward knowledge transfer module, which selects user behavior sequences from past tasks whose distribution has shifted, enabling past tasks to adapt to the current data distribution. Extensive experiments on public real-world datasets demonstrate the superiority and practicality of our model.
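To make the selection idea behind backward transfer concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual algorithm): it scores each past-task user behavior sequence by how far its current embedding has drifted from the embedding cached when that task was learned, and replays only the most-shifted sequences. The toy encoder `embed`, the drift score, and `top_k` are all assumptions introduced here for illustration.

```python
import numpy as np

def embed(sequence: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the shared sequence encoder:
    here, just the mean of the item vectors in the sequence."""
    return sequence.mean(axis=0)

def select_shifted_sequences(past_sequences, cached_embeddings, top_k=2):
    """Pick the past-task sequences whose current embedding has drifted
    furthest from the embedding cached when the task was learned --
    a toy proxy for detecting a 'transformed distribution'."""
    drift = [np.linalg.norm(embed(seq) - cached)
             for seq, cached in zip(past_sequences, cached_embeddings)]
    order = np.argsort(drift)[::-1]  # most-shifted sequences first
    return [past_sequences[i] for i in order[:top_k]]

# Toy usage: 4 past user sequences, each 5 items with 8-dim item vectors.
rng = np.random.default_rng(0)
past = [rng.normal(size=(5, 8)) for _ in range(4)]
# Simulate cached embeddings with varying amounts of drift per sequence.
cached = [embed(s) + rng.normal(scale=shift, size=8)
          for s, shift in zip(past, [0.1, 2.0, 0.1, 3.0])]
replay_batch = select_shifted_sequences(past, cached, top_k=2)
print(len(replay_batch), "sequences selected for backward transfer")
```

In this sketch, the selected sequences would then be replayed to update the past tasks' heads under the current distribution; in practice the drift score and selection threshold would come from the learned model rather than a fixed `top_k`.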