In many real-world situations, agents face environments with sparse rewards because dense rewards are difficult to design. When learning to play chess, for example, it is far harder for an agent to learn from the mere outcome of the game (i.e., win or loss) than from bonus rewards for capturing each of the opponent's pieces; however, the enormous number of piece configurations makes such bonus rewards complex to design. Moreover, in a multi-agent system, the joint action space grows exponentially with the number of agents, which poses a great challenge to learning, especially in sparse-reward environments. Previous methods have addressed this problem through exploration. Although these methods help accelerate learning in multi-agent reinforcement learning (MARL) with sparse rewards, exploration in multi-agent systems remains problematic. To address this issue, our paper proposes a method that generates sub-goals from experience replay for each agent using a novel distance function (SEMARL). Specifically, our method consists of four steps: i) assigning sub-goals from experience replay to agents, ii) giving agents individual rewards for reaching their sub-goals, iii) defining a novel distance function that provides a general form of sub-goal, and iv) exploring after reaching sub-goals to reduce the state space that must be explored. The method can be easily combined with other approaches and outperforms other state-of-the-art MARL algorithms on the StarCraft II micromanagement benchmark (SMAC).
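To make the four steps concrete, the following is a minimal sketch of the sub-goal loop, not the paper's actual implementation. All names (`sample_subgoals`, `intrinsic_reward`, the `threshold` and `bonus` parameters) are hypothetical, and the paper's novel distance function is not specified here, so a plain Euclidean distance stands in as a placeholder:

```python
import random
import numpy as np

def sample_subgoals(replay_buffer, n_agents, rng=random):
    """Step i (assumed form): assign each agent a sub-goal drawn
    from states stored in the experience replay buffer."""
    return [rng.choice(replay_buffer) for _ in range(n_agents)]

def distance(state, subgoal):
    """Step iii placeholder: the paper defines a novel distance
    function; Euclidean distance is used here only for illustration."""
    return np.linalg.norm(np.asarray(state) - np.asarray(subgoal))

def intrinsic_reward(state, subgoal, threshold=0.1, bonus=1.0):
    """Step ii (assumed form): give an agent an individual reward
    once it is within `threshold` of its assigned sub-goal; after
    that point the agent would switch to exploration (step iv)."""
    return bonus if distance(state, subgoal) <= threshold else 0.0

# Toy usage: two agents, states are 2-D positions stored in the buffer.
buffer = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([0.5, 0.2])]
goals = sample_subgoals(buffer, n_agents=2)
print(intrinsic_reward(np.array([0.48, 0.22]), goals[0]))
```

The sketch only illustrates the division of labor among the four steps; the actual sub-goal selection criterion, distance function, and exploration strategy are the paper's contributions and are described in the sections that follow.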