The standard support vector machine (SVM) minimizes the hinge loss function subject to the L-2 penalty or a roughness penalty. Recently, the L-1 SVM was suggested for variable selection because it produces sparse solutions [Bradley, P., Mangasarian, O., 1998. Feature selection via concave minimization and support vector machines. In: Shavlik, J. (Ed.), ICML'98. Morgan Kaufmann, Los Altos, CA; Zhu, J., Hastie, T., Rosset, S., Tibshirani, R., 2003. 1-norm support vector machines. Neural Inform. Process. Systems 16]. These learning methods are non-adaptive since their penalty forms are predetermined before looking at the data, and they often perform well only in certain situations. For instance, the L-2 SVM generally works well except when there are many noise inputs, whereas the L-1 SVM is preferred in the presence of many noise variables. In this article we propose and explore an adaptive learning procedure called the L-q SVM, where the best q > 0 is chosen automatically from the data. Both two-class and multi-class classification problems are considered. We show that the new adaptive approach combines the benefits of a class of non-adaptive procedures and achieves the best performance of this class across a variety of situations. Moreover, we observe that the proposed L-q penalty is more robust to noise variables than the L-1 and L-2 penalties. An iterative algorithm is suggested to solve the L-q SVM efficiently. Simulations and real data applications support the effectiveness of the proposed procedure.
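
To fix ideas, a minimal sketch of the linear L-q SVM objective follows; the notation (coefficient vector w, intercept b, tuning parameter lambda) is assumed here for illustration rather than taken verbatim from the paper:

    \min_{b,\,w}\; \sum_{i=1}^{n} \bigl[\, 1 - y_i \,( b + w^{\top} x_i )\, \bigr]_{+} \;+\; \lambda \sum_{j=1}^{p} \lvert w_j \rvert^{q}, \qquad q > 0,

where [u]_+ = max(u, 0) is the hinge loss and y_i is coded as +1/-1. Setting q = 2 recovers the L-2 SVM and q = 1 the L-1 SVM, so treating q as a tuning parameter lets the data choose among, and move beyond, these non-adaptive procedures.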
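The abstract does not reproduce the paper's iterative algorithm. As a generic illustration only (not the authors' method), the sketch below solves a linear L-q-penalized problem using a squared hinge loss for smoothness and a local quadratic approximation (LQA) of the L-q penalty, so each outer step reduces to reweighted-ridge gradient descent; all names and default values are assumptions.

    import numpy as np

    def lq_svm_sketch(X, y, q=1.0, lam=0.1, n_outer=20, n_inner=200,
                      lr=0.01, eps=1e-6):
        """Illustrative L-q SVM solver (a sketch, NOT the paper's algorithm).

        Squared hinge loss is used in place of the hinge for smoothness.
        The L-q penalty is handled by LQA: |w_j|^q is locally majorized by
        a weighted ridge term with weight d_j = (q/2) * |w_j|^(q-2).
        y must be coded as +1 / -1. The step size lr may need tuning.
        """
        n, p = X.shape
        w = np.zeros(p)
        b = 0.0
        d = np.full(p, 0.5 * q)  # first pass: benign weights (as if |w_j| = 1)
        for _ in range(n_outer):
            for _ in range(n_inner):
                margin = y * (X @ w + b)
                slack = np.maximum(0.0, 1.0 - margin)   # squared-hinge slack
                grad_w = -2.0 * (X.T @ (y * slack)) + 2.0 * lam * d * w
                grad_b = -2.0 * np.sum(y * slack)
                w -= lr * grad_w / n
                b -= lr * grad_b / n
            # refresh LQA weights at the current iterate; eps avoids 0^(q-2)
            d = 0.5 * q * (np.abs(w) + eps) ** (q - 2.0)
        return w, b

In practice q would be tuned jointly with lam, e.g., by cross-validation over a small grid of q values, which is the sense in which the procedure adapts to the data.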