
Tutorial on Support Vector Machines (Slides)

ISCAS 2001, Sydney, Australia, May 7, 2001

Abstract

Support Vector Machines and related Bayesian kernel methods, such as Gaussian Processes or the Relevance Vector Machine, have been deployed successfully in classification and regression tasks. They work by mapping the data into a high-dimensional feature space and computing linear functions on the features. This makes them readily amenable to optimization and theoretical analysis.
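A minimal illustration of the feature-space idea (my own toy example, not from the slides): a polynomial kernel k(x, z) = (x . z)^2 equals an inner product under an explicit degree-2 feature map, so linear functions on the features can be evaluated without ever constructing the features.

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for 2-d inputs (illustrative)."""
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2, np.sqrt(2) * x1 * x2])

def k(x, z):
    """The same inner product, computed directly in input space."""
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

# Both routes give the same feature-space inner product.
print(np.dot(phi(x), phi(z)), k(x, z))  # both are 16.0
```

The second route is the "kernel trick": it scales to feature spaces far too large to represent explicitly.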

The algorithmic advantage is that the optimization problems arising from Support Vector Machines have a global minimum and can be solved with standard quadratic programming tools. Furthermore, the parametrization of kernel methods tends to be rather intuitive for the user.
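To make the quadratic-programming claim concrete, here is a sketch (my own, under simplifying assumptions, not an algorithm from the slides) of solving the SVM dual by projected gradient ascent. The bias term is dropped, so the equality constraint on the dual variables disappears and only the box constraint 0 <= alpha_i <= C remains; the data are a synthetic two-cluster toy set.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated Gaussian clusters, labels -1 and +1.
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])

C = 1.0
K = X @ X.T                           # linear kernel matrix
Q = (y[:, None] * y[None, :]) * K     # Hessian of the dual objective

alpha = np.zeros(len(y))
eta = 1.0 / np.linalg.norm(Q, 2)      # step size from the curvature of Q
for _ in range(2000):
    grad = 1.0 - Q @ alpha            # gradient of the concave dual
    alpha = np.clip(alpha + eta * grad, 0.0, C)  # project onto the box

w = (alpha * y) @ X                   # recover the primal weight vector
accuracy = np.mean(np.sign(X @ w) == y)
print("training accuracy:", accuracy)
```

Because the dual is concave, this simple iteration converges to the global optimum; off-the-shelf QP packages solve the same problem much faster.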

In this tutorial, I will introduce the basic theory of Support Vector Machines and some recent extensions. Moreover, I will present a few simple algorithms to solve the optimization problems in practice.

Outline of the Tutorial

Linear Estimators

Kernels

Optimization

Bayesian Methods

Relevant Links