High communication overhead is a major bottleneck in federated learning (FL). To mitigate it, sparsification is employed in various compression frameworks. In standard FL, local clients upload their updated weights to the server; under sparsification, however, clients instead upload the difference between the updated weights and the original (global) weights. This study confirms the importance of uploading the weight difference in sparsification and examines how the accuracy of the two schemes differs.
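The contrast between the two schemes can be sketched with a minimal top-k sparsification example. This is an illustrative sketch, not the paper's implementation: the toy weight values, the `top_k_sparsify` helper, and the choice of k are all assumptions introduced here to show why sparsifying the weight difference typically preserves the local update better than sparsifying the weights themselves.

```python
import numpy as np

def top_k_sparsify(tensor, k):
    """Keep the k largest-magnitude entries, zero out the rest (illustrative helper)."""
    flat = tensor.flatten()
    if k >= flat.size:
        return tensor.copy()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest magnitudes
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(tensor.shape)

# Toy global weights and locally updated weights (hypothetical values).
w_global = np.array([0.50, -0.30, 0.80, 0.10])
w_local  = np.array([0.52, -0.31, 0.79, 0.40])

# Scheme A: sparsify the updated weights themselves before upload.
upload_weights = top_k_sparsify(w_local, k=2)

# Scheme B: sparsify the difference between updated and original weights.
delta = w_local - w_global
upload_delta = top_k_sparsify(delta, k=2)

# Server-side reconstruction for scheme B: add the sparse delta to the global weights.
w_reconstructed = w_global + upload_delta
```

In scheme A, dropping small-magnitude weights discards parameters outright, whereas in scheme B the dropped entries of the delta are small update steps, so adding the sparse delta back to the global weights stays close to the true local update at the same upload cost.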