Convolutional neural network (CNN) processors that exploit analog computing for high energy efficiency suffer from two major issues. First, frequent data conversions between layers limit energy efficiency. Second, analog circuits introduce computing errors because they are vulnerable to process, voltage, and temperature (PVT) variations. In this article, a CNN processor featuring a variation-tolerant analog datapath with analog memory (AMEM) is proposed, eliminating the need for data conversion between layers. To minimize computing error, both the AMEM and the ANU are designed so that their performance is unaffected by PVT variations. In addition, a variation-compensating circuit is proposed. A prototype implemented in 28-nm complementary metal-oxide-semiconductor (CMOS) achieves an energy efficiency of 437.9 TOPS/W in the analog datapath and 44.2 TOPS/W for the total system, and maintains its classification accuracy to within 0.5 percentage points across supply-voltage variations of ±10% and temperatures from −20 °C to 85 °C.