This page provides some miscellaneous tools based on LIBSVM (and LIBLINEAR). Roughly, they include

- Things not general enough to be included in LIBSVM
- Research code used in some of our past papers
- Some data sets in LIBSVM formats

Disclaimer: We take no responsibility for damage or other problems caused by using this software or these data sets.

LIBLINEAR for One-versus-one Multi-class Classification

Large-scale rankSVM

LIBLINEAR for more than 2^32 instances/features (experimental)

How large should the training set be?

Large linear classification when data cannot fit in memory

Weights for data instances

Fast training/testing for polynomial mappings of data

Cross Validation with Different Criteria (AUC, F-score, etc.)

Cross Validation using Higher-level Information to Split Data

LIBSVM for dense data

LIBSVM for string data

Multi-label classification

LIBSVM Extensions at Caltech

Feature selection tool

LIBSVM data sets

SVM-toy in 3D

Multi-class classification (and probability output) via error-correcting codes

SVM Multi-class Probability Outputs

An integrated development environment for libsvm

ROC Curve for Binary SVM

Grid Parameter Search for Regression

Radius Margin Bounds for SVM Model Selection

Reduced Support Vector Machines Implementation

LIBSVM for SVDD and finding the smallest sphere containing all data

DAG approach for multiclass classification

Please download the zip file. The usage is the same as LIBLINEAR except for a new option "-i". Specify "-i model" to load a previously computed model so that a slightly modified data set can be trained quickly. Example:

> train -s 0 heart_scale.sample
> train -s 0 -i heart_scale.sample.model heart_scale

Authors: Cheng-Hao Tsai and Chieh-Yen Lin

Please download the zip file. The usage is the same as LIBLINEAR except for a new option "-M". Specify "-M 1" to use one-versus-one multi-class classification. For example:

> train -M 1 dataset

Authors: Hsiang-Fu Yu, Chia-Hua Ho and Yu-Chin Juan

For linear rankSVM, we extend LIBLINEAR with the methods proposed in the following paper:

Ching-Pei Lee and Chih-Jen Lin.
Large-scale linear rankSVM. To appear in *Neural Computation*, 2014.

Please download the zip file. Details of using this code are in the README.ranksvm file. Apart from the new solver for rankSVM and the new data format supported in this extension, the usage is the same as LIBLINEAR.

For kernel rankSVM, we extend LIBSVM with the method in

Please download the zip file. Details of using this code are in the README.ranksvm file.

Authors: Ching-Pei Lee and Tzu-Ming Kuo

Author: Yu-Chin Juan

To use it, put the code under the compiled liblinear/matlab directory and open Octave or MATLAB:

> [y,x] = libsvmread('mydata');
> size_acc(y,x);

Currently, only

Examples:

Author: Po-Wei Wang

This code implements methods proposed in the following papers

- Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang, and Chih-Jen Lin. Large linear classification when data cannot fit in memory, ACM Transactions on Knowledge Discovery from Data, 5:23:1--23:23, 2012. Preliminary version at ACM KDD 2010 (Best research paper award).
- Kai-Wei Chang and Dan Roth. Selective Block Minimization for Faster Convergence of Limited Memory Large-scale Linear Models, ACM KDD 2011.

Please download the zip file. Details of using this code are in the README.cdblock file. Apart from new parameters for this extension, the usage is the same as LIBLINEAR.

Authors: Hsiang-Fu Yu and Kai-Wei Chang

For LIBSVM users, please download the zip file (MATLAB and Python interfaces are included).

For LIBLINEAR users, please download the zip file (MATLAB and Python interfaces are included).

- You must store weights in a separate file and specify -W your_weight_file. This setting differs from earlier versions, where weights were in the first column of the training data.
- Training/testing sets are the same as those for standard LIBSVM/LIBLINEAR.
- We do not support weights for test data.
- All solvers are supported.
- MATLAB/Python interfaces for both LIBSVM and LIBLINEAR are supported.
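As a sketch of how such a weight file might be prepared (the one-weight-per-line format and the `heart_scale.weight` name are assumptions here, not taken from the extension's documentation; check its README for the exact format):

```python
# Illustrative sketch: write one weight per line, in the same order as
# the training instances (format assumption -- see the extension's README).
weights = [1.0, 1.0, 5.0, 0.5]  # e.g., upweight the third instance

with open("heart_scale.weight", "w") as f:
    for w in weights:
        f.write(f"{w}\n")

# The file would then be passed via the -W option, e.g.:
#   ./train -W heart_scale.weight heart_scale
```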

Authors: Ming-Wei Chang, Hsuan-Tien Lin, Ming-Hen Tsai, Chia-Hua Ho and Hsiang-Fu Yu.

The implementation is based on a method proposed in the paper

Yin-Wen Chang, Cho-Jui Hsieh, Kai-Wei Chang, Michael Ringgaard, and
Chih-Jen Lin.
Low-degree Polynomial Mappings of Data for SVM, 2009.

Please download the zip file here. Details of using this code are in the README.poly2 file. Apart from new parameters for the degree-2 mapping, the usage is the same as LIBLINEAR.
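The core idea can be sketched as follows: an explicit feature map whose inner product equals the degree-2 polynomial kernel (gamma x·y + r)^2, so a fast linear method can be trained on the mapped data instead of a kernel method. This is illustrative code; `degree2_map` is not a function in the released package:

```python
import math

def degree2_map(x, gamma=1.0, r=1.0):
    # Explicit map phi with phi(x).phi(y) = (gamma * x.y + r)**2
    n = len(x)
    feats = [r]
    feats += [math.sqrt(2 * gamma * r) * xi for xi in x]
    feats += [gamma * xi * xi for xi in x]
    feats += [math.sqrt(2.0) * gamma * x[i] * x[j]
              for i in range(n) for j in range(i + 1, n)]
    return feats

# Sanity check: the explicit map reproduces the kernel value.
x, y = [1.0, 2.0], [0.5, -1.0]
lhs = sum(a * b for a, b in zip(degree2_map(x), degree2_map(y)))
rhs = (sum(a * b for a, b in zip(x, y)) + 1.0) ** 2   # gamma = r = 1
```

Training a linear classifier on phi(x) then corresponds to using the degree-2 polynomial kernel, which is the speedup studied in the paper.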

Authors: Yin-Wen Chang, Cho-Jui Hsieh, Kai-Wei Chang, and Yu-Chin Juan

For some unbalanced data sets, accuracy may not be a good criterion for evaluating a model. This tool enables LIBSVM and LIBLINEAR to conduct cross-validation and prediction with respect to different criteria (F-score, AUC, etc.).
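To illustrate why accuracy can mislead on unbalanced data, here is a minimal F-score computation (illustrative code, not the tool itself): a classifier that always predicts the majority class reaches 90% accuracy in this example yet has an F-score of 0.

```python
def f_score(y_true, y_pred, pos=1):
    # F1 = 2 * precision * recall / (precision + recall)
    tp = sum(t == pos and p == pos for t, p in zip(y_true, y_pred))
    fp = sum(t != pos and p == pos for t, p in zip(y_true, y_pred))
    fn = sum(t == pos and p != pos for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [1] + [-1] * 9          # 1 positive among 10 instances
y_pred = [-1] * 10               # always predict the majority class
acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# acc is 0.9, but the F-score is 0.0
```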

Details

Authors: Hsiang-Fu Yu, Chia-Hua Ho, and Cheng-Hao Tsai

Assume you have 20,000 images of 200 users:

- User 1: 100 images
- ...
- User 200: 100 images
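The point of such a split can be sketched as follows (illustrative code, not the tool itself): whole users are assigned to either the training or the test side, so images of the same user never appear on both sides, which would make cross-validation accuracy overly optimistic.

```python
import random

def split_by_group(groups, test_fraction=0.25, seed=0):
    # Assign whole groups (e.g., users) to train or test, so that no
    # group's instances end up on both sides of the split.
    uniq = sorted(set(groups))
    rng = random.Random(seed)
    rng.shuffle(uniq)
    n_test = max(1, int(len(uniq) * test_fraction))
    test_groups = set(uniq[:n_test])
    train_idx = [i for i, g in enumerate(groups) if g not in test_groups]
    test_idx = [i for i, g in enumerate(groups) if g in test_groups]
    return train_idx, test_idx

# 4 users with 3 images each; one whole user goes to the test set.
groups = ["user1"] * 3 + ["user2"] * 3 + ["user3"] * 3 + ["user4"] * 3
train_idx, test_idx = split_by_group(groups)
```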

Author: Ming-Fang Weng

Author: Guo-Xun Yuan

Usage: ./fselect.py training_file [testing_file]

Output files: .fscore shows the importance of features, .select gives the running log, and .pred gives testing results.

More information about this implementation can be found in Y.-W. Chen and C.-J. Lin, Combining SVMs with various feature selection strategies. To appear in the book "Feature Extraction, Foundations and Applications," 2005. This implementation is still preliminary. Comments are very welcome.

Author: Yi-Wei Chen

A simple applet demonstrating SVM classification and regression in 3D. It extends the java svm-toy in the LIBSVM package.

**Note: libsvm does support multi-class classification.** The code here implements some extensions for experimental purposes.

This code implements multi-class classification and probability estimates using 4 types of error correcting codes. Details of the 4 types of ECCs and the algorithms can be found in the following paper:

T.-K. Huang, R. C. Weng, and C.-J. Lin. Generalized Bradley-Terry Models and Multi-class Probability Estimates. *Journal of Machine Learning Research*, 7(2006), 85-115. A (very) short version of this paper appears in NIPS 2004.

The code can be downloaded here. The installation is the same as for the standard LIBSVM package, and different types of ECCs are specified with the "-i" option. Type "svm-train" without any arguments to see the usage. Note that both "one-against-one" and "one-against-the-rest" multi-class strategies are part of the implementation.

If you specify -b in training and testing, you get probability estimates and the predicted label is the one with the largest value. If you do not specify -b, this is classification based on decision values. Now we use the "exponential-loss" method in the paper:

Allwein et al.: Reducing multiclass to binary: a unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113--141, 2001,

to predict the class label. For one-against-the-rest (also called 1vsall), this is the same as the commonly used rule

argmax_{i} (decision value of the ith class vs. the rest).

For one-against-one, it differs from the max-win strategy used in libsvm.
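The exponential-loss decoding just described can be sketched as follows (illustrative code, not the released implementation). With a one-against-the-rest code matrix, minimizing the exponential loss picks the same class as the usual argmax over decision values:

```python
import math

def ecc_predict(M, f):
    # Exponential-loss decoding (Allwein et al.): choose the class k
    # minimizing sum_j exp(-M[k][j] * f_j), where M[k][j] in {+1, -1, 0}
    # is the code matrix and f_j is the j-th binary decision value.
    losses = [sum(math.exp(-m * fj) for m, fj in zip(row, f)) for row in M]
    return min(range(len(M)), key=losses.__getitem__)

# One-against-the-rest code matrix for 3 classes.
M = [[ 1, -1, -1],
     [-1,  1, -1],
     [-1, -1,  1]]
f = [0.3, -0.2, 1.5]   # decision values of the 3 binary problems
# ecc_predict(M, f) agrees with argmax_i f_i for this code matrix
```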

MATLAB code for the experiments in our paper is available here.

Author: Tzu-Kuo Huang

T.-F. Wu, C.-J. Lin, and R. C. Weng. Probability Estimates for Multi-class Classification by Pairwise Coupling. Journal of Machine Learning Research, 2004. A short version appears in NIPS 2003.

LIBSVM 2.6 and later already include one of the methods described here, so you may directly use standard libsvm unless you are interested in doing comparisons. Please download the tgz file here. The data used in the paper is available here. Then check the README for installation.

MATLAB programs for the synthetic data experiment in the paper can be found in this directory. The main program is fig1a.m.

Author: Tingfan Wu (svm [at] future.csie.org)

Author: Chih-Chung Chang

This tool gives the ROC (Receiver Operating Characteristic) curve and AUC (Area Under Curve) by ranking the decision values. Note that we assume labels are +1 and -1. Multi-class is not supported yet.
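The ranking-based AUC computation can be sketched as follows (illustrative code, assuming +1/-1 labels): AUC is the fraction of (positive, negative) pairs in which the positive instance receives the larger decision value, with ties counted as half.

```python
def auc_from_decision_values(labels, dec_values):
    # Fraction of (positive, negative) pairs ranked correctly by the
    # decision value; ties count as 1/2.
    pos = [d for y, d in zip(labels, dec_values) if y == +1]
    neg = [d for y, d in zip(labels, dec_values) if y == -1]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Example: 3 of the 4 (positive, negative) pairs are ranked correctly.
labels = [+1, +1, -1, -1]
dec_values = [0.9, 0.4, 0.5, 0.1]
```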

You can use either MATLAB or Python.

If using MATLAB, you need to

- Download the LIBSVM MATLAB interface from the LIBSVM page and build it.
- Download plotroc.m to the main directory of the LIBSVM MATLAB interface.
- Type
> help plotroc

If using Python, you need to

- Download LIBSVM (version 2.91 or after) and make the LIBSVM python interface.
- Download plotroc.py to the python directory.
- Edit the path of gnuplot in plotroc.py if necessary.
- The usage is
plotroc.py [-v cv_fold | -T testing_file] [libsvm_options] training_file

- Example:
> plotroc.py -v 5 -c 10 ../heart_scale

To use LIBLINEAR, you need the following modifications:

- MATLAB: Copy plotroc.m to the matlab directory (note that the MATLAB interface is included in LIBLINEAR). Replace svmtrain and svmpredict with train and predict, respectively.

Authors: Tingfan Wu (svm [at] future.csie.org), Chien-Chih Wang (d98922007 [at] ntu.edu.tw), and Hsiang-Fu Yu

Usage: grid.py [-log2c begin,end,step] [-log2g begin,end,step] [-log2p begin,end,step] [-v fold] [-svmtrain pathname] [-gnuplot pathname] [-out pathname] [-png pathname] [additional parameters for svm-train] dataset
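The (C, gamma) grid such a search sweeps can be sketched as follows (illustrative code; the default ranges shown are assumptions and may differ from grid.py's actual defaults):

```python
import itertools

def log2_grid(log2c=(-5, 15, 2), log2g=(3, -15, -2)):
    # Enumerate (C, gamma) = (2^c, 2^g) with c and g swept over
    # [begin, end] using the given (possibly negative) step.
    def seq(begin, end, step):
        v = begin
        while (step > 0 and v <= end) or (step < 0 and v >= end):
            yield v
            v += step
    return [(2.0 ** c, 2.0 ** g)
            for c, g in itertools.product(list(seq(*log2c)),
                                          list(seq(*log2g)))]

grid = log2_grid()
# Each (C, gamma) pair would then be passed to svm-train via -c and -g
# for a cross-validation run; the pair with the best rate is reported.
```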

Authors: Hsuan-Tien Lin (initial modification); Tzu-Kuo Huang (the parameter epsilon); Wei-Cheng Chang.

Author: Wei-Chun Kao, with help from Leland Wang, Kai-Min Chung, and Tony Sun

Please download the .tgz file here. After building the binaries, type svm-train to see the usage. It includes different methods to implement RSVM.

To speed up the code, you may want to link the code to optimized BLAS/LAPACK or ATLAS.

Author: Kuan-Min Lin

- -s 5 SVDD
- -s 6 gives the square of the radius for L1-loss SVM
- -s 7 gives the square of the radius for L2-loss SVM

For details of our SVDD formulation and implementation, please see W.-C. Chang, C.-P. Lee, and C.-J. Lin. A Revisit to Support Vector Data Description (SVDD). Technical report, 2013. Note that you should choose a value of C in [1/l, 1], where l is the number of data points. Models with C > 1 are all the same, and so are models with C < 1/l.
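For context, SVDD finds the smallest sphere (in feature space) containing most of the data; a common way to write the primal problem is:

```latex
\min_{R,\,a,\,\xi}\; R^2 + C \sum_{i=1}^{l} \xi_i
\quad \text{subject to} \quad
\|\phi(x_i) - a\|^2 \le R^2 + \xi_i,\qquad \xi_i \ge 0,\; i = 1,\dots,l,
```

where a is the center of the sphere, R its radius, and C the parameter that should be chosen in [1/l, 1] as noted above.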

For details of the square of the radius, please see K.-M. Chung, W.-C. Kao, C.-L. Sun, L.-L. Wang, and C.-J. Lin. Radius Margin Bounds for Support Vector Machines with the RBF Kernel. Neural Computation, 15(2003), 2643-2681.

Authors: Leland Wang, Holger Froehlich (University of Tuebingen), Konrad Rieck (Fraunhofer institute), Chen-Tse Tsai, Tse-Ju Lin, Wei-Cheng Chang, Ching-Pei Lee

Replace it with this segment of code:

double pred_result = svm_predict_values(model, x, dec_values);
free(dec_values);
return pred_result;

This follows from the code used in the paper:

C.-W. Hsu and C.-J. Lin. A comparison of methods for multi-class support vector machines, *IEEE Transactions on Neural Networks*, 13(2002), 415-425.

Author: Chih-Wei Hsu

Please contact Chih-Jen Lin with any questions.