# LibSVM Learner - Part I


I promised my readers over a month ago that I would post about YALE/RapidMiner's LibSVM operator. Unfortunately, life got in the way, so I'm resorting to a multi-part series just to get the information out to you. Bear with me over the next few days (or weeks) as I write about this exciting, powerful, and complicated learner.

First off, I only use the LibSVM operator in YALE 3.4 occasionally: I fool around with data with it sometimes, but I rarely use it to build trading models. For my time-series data modeling I prefer the Gaussian Regression, Multilayer Perceptron, or IBk learners. You certainly could use the LibSVM learner on time-series data, but I have found it more useful for analyzing non-time-series data.

What the heck is an SVM anyway? Wikipedia defines a Support Vector Machine (SVM) as "a set of related supervised learning methods used for classification and regression." Wikipedia goes on to give a decent overview of how SVMs work:

A special property of SVMs is that they simultaneously minimize the empirical classification error and maximize the geometric margin; hence they are also known as **maximum margin classifiers**.

Support vector machines map input vectors to a higher dimensional space where a maximal separating hyperplane is constructed. Two parallel hyperplanes are constructed on each side of the hyperplane that separates the data. The separating hyperplane is the hyperplane that maximizes the distance between the two parallel hyperplanes. An assumption is made that the larger the margin or distance between these parallel hyperplanes the better the generalisation error of the classifier will be.
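To make the "geometric margin" idea from the quote above concrete, here is a minimal sketch in plain Python. The points, the hyperplane weights `w`, and the offset `b` are all hand-picked for illustration (nothing here is learned, and this is not the YALE operator itself); it just computes the distance from a few labeled points to a candidate separating hyperplane. An SVM would search for the `w` and `b` that make the smallest such distance as large as possible.

```python
import math

# Toy 2-D points with +1 / -1 labels, chosen to be linearly separable.
points = [((2.0, 2.0), 1), ((3.0, 3.0), 1), ((0.0, 0.0), -1), ((1.0, 0.0), -1)]

# A hand-picked candidate hyperplane w.x + b = 0 (illustrative, not learned).
w = (1.0, 1.0)
b = -3.0

def signed_distance(x, w, b):
    """Signed distance from point x to the hyperplane w.x + b = 0."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

# The geometric margin is the smallest label-weighted distance from any
# point to the hyperplane; an SVM maximizes exactly this quantity.
margin = min(label * signed_distance(x, w, b) for x, label in points)
print(round(margin, 4))  # the closest point sits this far from the hyperplane
```

If any `label * signed_distance(...)` term came out negative, that point would be on the wrong side of the hyperplane, which is why maximizing this minimum both separates the classes and pushes the boundary away from them.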

LibSVM was created by two researchers, Chih-Chung Chang and Chih-Jen Lin, at the National Science Council of Taiwan. YALE/RapidMiner packages their LibSVM learner into the nice operator you see to the left of this paragraph.

What makes the LibSVM learner so appealing is that it can perform five specialized tasks: two types of regression (epsilon-SVR and nu-SVR), two types of classification (C-SVC and nu-SVC), and something called one-class SVM.
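For reference, the standalone `svm-train` tool that ships with LibSVM selects among these same five tasks with its `-s` flag (the mapping below is from the LibSVM README; the file names are placeholders, and I'm assuming the YALE operator exposes the equivalent choice as a parameter):

```shell
# -s selects the SVM type in LibSVM's svm-train:
#   -s 0  C-SVC          (classification)
#   -s 1  nu-SVC         (classification)
#   -s 2  one-class SVM  (distribution estimation / outlier detection)
#   -s 3  epsilon-SVR    (regression)
#   -s 4  nu-SVR         (regression)
# Example: epsilon-SVR with an RBF kernel (-t 2) on placeholder files:
svm-train -s 3 -t 2 train.dat model.out
```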

I'll go into greater detail about each type of specialized task in Part II of my LibSVM tutorial. If you want to learn more before then, visit Chang and Lin's website for more details.