This paper proposes a new high-performance differentially private machine learning framework that reduces overall memory usage and increases training throughput while producing mathematically identical results. The framework consists of two major components: example-wise weight gradient computation and adaptive clipping. By implementing an end-to-end DP-SGD framework built on these components, the authors show that the framework reduces memory usage and increases training throughput.
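
The core step these components feed into can be sketched as follows. This is a minimal NumPy illustration of the standard DP-SGD aggregation (per-example clipping followed by Gaussian noise), not the paper's implementation; the function name, parameters, and fixed clipping threshold are assumptions (the paper's adaptive clipping would adjust the threshold during training).

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step (illustrative sketch):
    clip each example's gradient to clip_norm, sum, add Gaussian
    noise scaled to the clipping threshold, and average."""
    n = per_example_grads.shape[0]
    # Per-example L2 norms over flattened parameters
    norms = np.linalg.norm(per_example_grads.reshape(n, -1), axis=1)
    # Scale factor <= 1 so each clipped gradient has norm <= clip_norm
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale[:, None]
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1:])
    return (clipped.sum(axis=0) + noise) / n

rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 4))  # 8 examples, 4 parameters
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

Computing `per_example_grads` explicitly is exactly the memory bottleneck the paper targets: materializing one gradient per example multiplies memory by the batch size, which motivates the example-wise weight gradient computation component.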