We propose a knowledge distillation method based on feature blurring. We identify a problem with previous methods, which transfer the exact values of the positive features. To convey only the information necessary for training a network, we propose a distillation method that transfers blurred features. Our method is simpler and loses less information than distillation methods that transform features into attention maps or encoding vectors. Students trained with our method achieve higher accuracy and are optimized under fewer constraints, which we verify on various datasets. On CIFAR-100, our method achieves the best performance among several distillation methods; the improvement is especially significant when the depth or architecture of the teacher and student networks differ. Our method also outperforms our baseline, overhaul distillation, on CIFAR-10.
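As a rough illustration of the idea of transferring blurred rather than exact features, the following sketch blurs a teacher feature map and penalizes the student's distance to the blurred target. The mean-filter kernel, the L2 distance, and the function names are hypothetical choices for illustration; the abstract does not specify the exact blur or loss used in the paper.

```python
import numpy as np

def blur_feature(feat, k=3):
    """Blur a 2D feature map with a k x k mean filter.

    Hypothetical choice of blur; the abstract only states that the
    teacher feature is blurred before being transferred.
    """
    h, w = feat.shape
    pad = k // 2
    padded = np.pad(feat, pad, mode="edge")
    out = np.empty_like(feat, dtype=float)
    for i in range(h):
        for j in range(w):
            # Average over the k x k neighborhood around (i, j).
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def blur_distillation_loss(student_feat, teacher_feat, k=3):
    """Mean squared distance between the student feature and the
    blurred teacher feature, so the student is pushed to match the
    coarse structure of the teacher rather than its exact values."""
    target = blur_feature(teacher_feat, k)
    return float(np.mean((student_feat - target) ** 2))
```

For example, a student whose feature map already equals the blurred teacher map incurs zero loss, while matching the raw (unblurred) teacher values is not required.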