NIPS 1997 Workshop

Support Vector Machines


The Support Vector (SV) learning algorithm (Boser, Guyon & Vapnik, 1992; Cortes & Vapnik, 1995; Vapnik, 1995) provides a general method for solving pattern recognition, regression estimation, and operator inversion problems. The method is grounded in the theory of learning from finite sample sizes. The last few years have witnessed increasing interest in SV machines, due largely to excellent results in pattern recognition, regression estimation, and time series prediction experiments. The purpose of this workshop is (1) to provide an overview of recent developments in SV machines, ranging from theoretical results to applications, (2) to explore connections with other methods, and (3) to identify the strengths and weaknesses of SVMs and directions for future research. We invite contributions on SV machines and related approaches, with empirical support wherever possible.

Topics of interest include:


  • SV applications

  • Benchmarks

  • SV optimization and implementation issues

  • Theory of generalization and regularization

  • Learning methods based on Hilbert-Schmidt kernels (e.g. kernel PCA)

  • Links to related methods and concepts (e.g. boosting, fat shattering)

  • Representation of functions in SV machines (e.g. splines, ANOVA)
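
To make the learning problem concrete, here is a minimal sketch of linear SV training. It is an illustrative simplification: it minimizes the regularized hinge loss in the primal by stochastic subgradient descent (a Pegasos-style update), rather than solving the dual quadratic program of the original SV algorithm, and all function and variable names are hypothetical.

```python
def train_linear_svm(data, labels, lam=0.01, epochs=200):
    """Minimize lam/2 * ||w||^2 + average hinge loss over (data, labels).

    Illustrative primal subgradient sketch, not the standard dual QP.
    """
    dim = len(data[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in zip(data, labels):          # y is +1 or -1
            t += 1
            eta = 1.0 / (lam * t)               # decaying step size
            score = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi * (1.0 - eta * lam) for wi in w]   # regularization shrink
            if y * score < 1:                   # inside margin: hinge active
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# Toy problem: points separable by a line through the origin.
points = [(2.0, 1.0), (1.0, 2.0), (-2.0, -1.0), (-1.0, -2.0)]
labels = [1, 1, -1, -1]
w = train_linear_svm(points, labels)
```

In the full SV algorithm one instead solves the dual quadratic program, so that inner products can be replaced by Hilbert-Schmidt kernels (as in the topic list above); the primal form here covers only the simplest linear case.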


The workshop was held in Breckenridge, Colorado, on December 6, 1997.