This paper presents Z-PIM, an energy-efficient processing-in-memory (PIM) architecture that supports zero-skipping operation and fully-variable weight bit-precision for efficient deep neural network (DNN) processing. Its 8T-SRAM-cell-based bit-serial operation with a hierarchical bit-line structure enables variable weight precision and reduces bit-line switching by 95.42% in the convolution layers of VGG-16. Z-PIM exploits the abundant zeros in weight data by skipping reads of the corresponding input data, while read-sequence rearrangement and pipelining improve throughput by 66.1%. In addition, diagonal accumulation logic is proposed to accumulate both the partial sums of the bit-serial operation and the spatial products. As a result, the Z-PIM chip, fabricated in a 65nm process, consumes 5.294mW on average and achieves 0.31-49.12 TOPS/W energy efficiency for convolution operations as sparsity varies from 0.1 to 0.9 and weight bit-precision from 1b to 16b.
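
For intuition, the sketch below is a minimal software analogue (not the chip's actual datapath) of the bit-serial, zero-skipping dot product described above: weights are consumed one bit-plane at a time, input reads are skipped wherever the weight bit is zero, and the weight precision is a free parameter. All function and variable names are illustrative assumptions.

```python
# Hypothetical software model of a bit-serial, zero-skipping dot product
# with variable weight precision; illustrative only, not Z-PIM's RTL.

def bit_serial_dot(inputs, weights, w_bits):
    """Compute sum(x * w) by iterating over weight bit-planes.

    inputs  : non-negative int activations
    weights : non-negative ints, each < 2**w_bits
    w_bits  : weight precision in bits (variable, e.g. 1 to 16)
    """
    acc = 0
    for b in range(w_bits):              # one pass per weight bit-plane
        plane_sum = 0
        for x, w in zip(inputs, weights):
            if (w >> b) & 1:             # zero-skipping: read the input only
                plane_sum += x           # when the weight bit is 1
        acc += plane_sum << b            # scale the plane's partial sum by 2**b
    return acc


if __name__ == "__main__":
    xs = [3, 0, 7, 2]
    ws = [5, 6, 0, 1]                    # sparse weights -> many skipped reads
    assert bit_serial_dot(xs, ws, w_bits=4) == sum(x * w for x, w in zip(xs, ws))
```

In this analogy, the shift-and-add of each bit-plane's partial sum plays the role that the proposed diagonal accumulation logic plays in hardware, where bit-serial partial sums and spatial products are accumulated together.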