Singing techniques enable expressive vocal performance through temporal fluctuations of timbre, pitch, and other components of the voice. In this study, we compare the performance of hand-crafted features with that of features extracted automatically by deep learning methods for identifying different singing techniques. Hand-crafted acoustic features are based on expert knowledge of the singing voice, whereas the deep learning methods take low-level representations, such as spectrograms and raw waveforms, as inputs and learn features automatically using convolutional neural networks (CNNs). The extracted features are fed to a random forest classifier for comparison with the hand-crafted features on a 10-class singing technique classification task. We show that the CNN-based features outperform the hand-crafted features in terms of classification accuracy. Furthermore, we explore various time-frequency representations as inputs to the CNNs. We show that the best-performing input is the multi-resolution short-time Fourier transform (STFT), when the CNN kernels are oblong and slide along the frequency and time axes separately.
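The oblong-kernel idea can be illustrated with a minimal NumPy sketch: one tall kernel sliding along the frequency axis and one wide kernel sliding along the time axis of a spectrogram. The spectrogram dimensions and kernel sizes below are illustrative assumptions, not the values used in this study.

```python
import numpy as np

def conv2d_valid(x, kernel):
    """Valid-mode 2D cross-correlation via sliding windows."""
    kf, kt = kernel.shape
    windows = np.lib.stride_tricks.sliding_window_view(x, (kf, kt))
    return np.einsum("ijkl,kl->ij", windows, kernel)

# Toy spectrogram: 128 frequency bins x 64 time frames (shapes are illustrative).
rng = np.random.default_rng(0)
spec = rng.standard_normal((128, 64))

# Oblong kernels: a tall kernel emphasizes structure along the frequency axis
# (e.g., spectral envelope), while a wide kernel emphasizes structure along
# the time axis (e.g., temporal modulations such as vibrato).
freq_kernel = rng.standard_normal((32, 1))  # 32 bins x 1 frame
time_kernel = rng.standard_normal((1, 16))  # 1 bin x 16 frames

freq_map = conv2d_valid(spec, freq_kernel)  # shape (97, 64)
time_map = conv2d_valid(spec, time_kernel)  # shape (128, 49)
```

In a CNN, each branch would hold a stack of such kernels whose weights are learned; the two axis-wise feature maps can then be combined in deeper layers.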