Support Vector Machines (SVM) have received much attention in the past decade, but mainly under the assumption of full confidence in the training data. In this thesis we discuss two approaches for adapting the SVM to uncertainty in the training data, corresponding to two types of uncertainty: Gaussian noise in the input data and varying confidence in the data labels. We introduce two variants of the Fuzzy Support Vector Machine (FSVM) algorithm, in which the basic idea is to assign weights to training samples depending on their position in the feature space, mapping the values of a function based on the Kernel Target Alignment (KTA) onto the unit interval using different types of mapping functions. We motivate our choice of the generalized logistic function and also modify the FSVM algorithm to run faster by reducing the dimension of the parameter search. In the second part, we extend the Total Support Vector Classification (TSVC) algorithm to training data with Gaussian noise. We detail the connection between the multivariate Gaussian distribution and the contour ellipsoids of its density function, and use such an ellipsoid as a subspace within which we iteratively optimize the solution. We compare the different approaches on synthetic and real data sets and show that, by yielding lower test misclassification rates, our approaches outperform, to some extent, the original SVM algorithm as well as the original FSVM and TSVC algorithms.
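The weighting idea described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the per-sample scores below are hypothetical stand-ins for the KTA-based function values, and the parameter names of the generalized logistic function (`b` for growth rate, `m` for midpoint, `nu` for asymmetry) are assumptions made for the example.

```python
import math

def generalized_logistic(x, b=5.0, m=0.0, nu=1.0):
    """Map a raw score x onto the unit interval (0, 1).

    Hypothetical parameterization: b is the growth rate, m the midpoint,
    nu controls the asymmetry of the curve.
    """
    return 1.0 / (1.0 + math.exp(-b * (x - m))) ** (1.0 / nu)

# Toy per-sample scores standing in for the KTA-based function values.
scores = [-0.8, -0.1, 0.0, 0.3, 0.9]

# Fuzzy membership weights: each training sample gets a weight in (0, 1).
weights = [generalized_logistic(s) for s in scores]
```

Because the mapping is strictly increasing, the relative ordering of the samples by score is preserved in the resulting weights.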
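The connection between a multivariate Gaussian and its contour ellipsoids rests on a standard fact: the density is constant on the sets where the squared Mahalanobis distance (x - mu)^T Sigma^{-1} (x - mu) equals a constant c^2, and these sets are ellipsoids centered at mu. A small sketch of that check, with an arbitrary diagonal covariance chosen for the example:

```python
import math

def mahalanobis_sq(x, mu, sigma_inv):
    """Squared Mahalanobis distance (x - mu)^T Sigma^{-1} (x - mu) in 2-D."""
    d = [xi - mi for xi, mi in zip(x, mu)]
    return sum(d[i] * sigma_inv[i][j] * d[j]
               for i in range(2) for j in range(2))

# Example Gaussian: zero mean, diagonal covariance diag(2.0, 0.5).
mu = (0.0, 0.0)
sigma_inv = [[0.5, 0.0], [0.0, 2.0]]  # inverse of diag(2.0, 0.5)

# For the contour level c^2 = 1, the semi-axes of the ellipsoid are
# sqrt(2.0) and sqrt(0.5); a point on the first semi-axis lies on it.
p = (math.sqrt(2.0), 0.0)
level = mahalanobis_sq(p, mu, sigma_inv)
```

Any point inside this ellipsoid has squared Mahalanobis distance below c^2, which is what allows the ellipsoid to serve as a bounded subspace for the iterative optimization described above.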