Tutorial on Support Vector Machines (Slides)

ICANN 2001, Vienna, Austria, August 21, 2001

Alex Smola
RSISE, Machine Learning Group, Australian National University, Canberra


Support Vector Machines and related Bayesian kernel methods such as Gaussian Processes or the Relevance Vector Machine have been deployed successfully in classification and regression tasks. They work by mapping the data into a high-dimensional feature space and computing linear functions on the features. This has the appeal of making them easily accessible to optimization and theoretical analysis.
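The feature-space idea can be sketched with the kernel trick: a kernel k(x, z) equals an inner product in a (possibly infinite-dimensional) feature space, so a linear function on the features can be evaluated without ever computing the feature map explicitly. Below is a minimal illustration using a Gaussian RBF kernel; the data and coefficients are arbitrary toy values, not taken from the tutorial.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel: k(x, z) = exp(-gamma * ||x - z||^2).

    This equals the inner product <phi(x), phi(z)> in an
    infinite-dimensional feature space, so linear functions on the
    features can be evaluated without computing phi explicitly.
    """
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# A kernel expansion f(x) = sum_i alpha_i k(x_i, x) is a linear
# function in feature space; alpha is arbitrary here, for illustration.
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
alpha = np.array([1.0, -0.5, 0.25])

def f(x):
    return rbf_kernel(X_train, np.atleast_2d(x)).T @ alpha
```

In an actual SVM or Gaussian Process, the coefficients alpha are of course determined by the training procedure rather than fixed by hand.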

The algorithmic advantage is that the optimization problems arising from Support Vector Machines have a global minimum and can be solved with standard quadratic programming tools. Furthermore, the parametrization of kernel methods tends to be rather intuitive for the user.
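To make the quadratic-programming view concrete, here is a minimal sketch of solving the soft-margin SVM dual on a toy linearly separable data set. The data, the choice of C, and the use of scipy's generic SLSQP solver (rather than a dedicated QP package) are illustrative assumptions, not part of the tutorial itself.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data; a linear kernel keeps the sketch minimal.
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
K = X @ X.T                              # Gram matrix of the linear kernel
Q = (y[:, None] * y[None, :]) * K

def neg_dual(alpha):
    # Negated SVM dual objective: we minimize
    # 1/2 sum_ij alpha_i alpha_j y_i y_j k(x_i, x_j) - sum_i alpha_i.
    return 0.5 * alpha @ Q @ alpha - alpha.sum()

constraints = ({'type': 'eq', 'fun': lambda a: a @ y},)   # sum_i alpha_i y_i = 0
bounds = [(0.0, 10.0)] * len(y)          # box constraint 0 <= alpha_i <= C, C = 10
res = minimize(neg_dual, np.zeros(len(y)), method='SLSQP',
               bounds=bounds, constraints=constraints)
alpha = res.x

# Recover the primal solution from the support vectors.
w = (alpha * y) @ X
sv = alpha > 1e-6
b = np.mean(y[sv] - X[sv] @ w)

def predict(X_new):
    return np.sign(X_new @ w + b)
```

Because the dual is a convex QP, any such solver reaches the global minimum; dedicated decomposition methods (such as SMO) exploit the problem structure to scale to large data sets.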

In this tutorial, I will introduce the basic theory of Support Vector Machines and some recent extensions. Moreover, I will present a few simple algorithms to solve the optimization problems in practice.

Outline of the Tutorial

Linear Estimators

Bayesian Methods