An energy-efficient Deep-Neural-Network (DNN) learning processor is proposed for on-chip learning and iterative weight pruning (WP). This work has three key features: 1) stochastic coarse-fine pruning reduces the computation workload by 99.7% compared with a previous WP algorithm while maintaining high weight sparsity, 2) adaptive input/output/weight skipping (AIOWS) achieves 30.1× higher throughput than a previous DNN learning processor [1] for both inference and learning, 3) a weight-memory-shared pruning unit removes on-chip weight-memory accesses for WP. As a result, this work shows 146.52 TOPS/W energy efficiency, which is 5.79× higher than the state-of-the-art [1].
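The coarse-fine idea behind iterative weight pruning can be illustrated with a toy sketch: a coarse stage removes whole low-importance structures (here, output channels), and a fine stage zeroes the smallest remaining weights. The granularity, the pruning fractions, and the deterministic cutoffs below are illustrative assumptions, not the paper's actual stochastic schedule:

```python
import numpy as np

def coarse_fine_prune(w, coarse_frac=0.25, fine_frac=0.5):
    """Toy two-stage pruning sketch (illustrative, not the paper's algorithm).

    Coarse stage: zero whole output channels (rows) with the smallest L1
    norms. Fine stage: zero the smallest-magnitude weights among the
    survivors. A stochastic variant would sample pruning candidates with
    probability related to their magnitudes instead of a hard cutoff.
    """
    w = w.copy()
    # Coarse: prune the lowest-norm fraction of rows (output channels).
    norms = np.abs(w).sum(axis=1)
    n_drop = int(coarse_frac * w.shape[0])
    w[np.argsort(norms)[:n_drop], :] = 0.0
    # Fine: among the remaining nonzeros, prune the smallest magnitudes.
    nz = np.flatnonzero(w)
    k = int(fine_frac * nz.size)
    if k > 0:
        smallest = nz[np.argsort(np.abs(w.flat[nz]))[:k]]
        w.flat[smallest] = 0.0
    return w

w = np.random.default_rng(0).normal(size=(8, 16))
pruned = coarse_fine_prune(w)
sparsity = 1.0 - np.count_nonzero(pruned) / pruned.size
```

Because pruned weights are exactly zero, a skipping datapath such as AIOWS can bypass the corresponding multiply-accumulates, which is where the throughput and energy gains come from.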