LIBLINEAR Experiments

Machine Learning Group at National Taiwan University

This page provides the source codes for the papers related to LIBLINEAR.


Experiments on linear rankSVM

Programs used to generate experiment results in the paper

C.-P. Lee and C.-J. Lin. Large-scale Linear RankSVM. Technical report, 2013,

can be found in this tar.gz file.

Use the files here only if you are interested in redoing our experiments. To apply the method to your own applications, all you need is a LIBLINEAR extension. Check "Large-scale linear rankSVM" at LIBSVM Tools.


Experiments on linear support vector regression

Programs used to generate experiment results in the paper

C.-H. Ho and C.-J. Lin. Large-scale Linear Support Vector Regression. JMLR, 2012,

can be found in this zip file.


Experiments on linear classification when data cannot fit in memory

An algorithm in

H.-F. Yu, C.-J. Hsieh, K.-W. Chang, and C.-J. Lin, Large linear classification when data cannot fit in memory. ACM KDD 2010 (Best research paper award). Extended version appeared in ACM Transactions on Knowledge Discovery from Data, 5:23:1--23:23, 2012.

has been implemented as an extension of LIBLINEAR. It aims to handle data larger than your memory capacity. It can be found in LIBSVM Tools.

To repeat the experiments in our paper, check this tgz file. Don't use it unless you want to regenerate figures. For your own experiments, use the LIBLINEAR extension at LIBSVM Tools.
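Loosely, the block idea is to partition the training set into blocks stored on disk and update the model one block at a time, so that only a single block must fit in memory. A toy Python sketch of this idea (the per-block update below is a plain hinge-loss subgradient step standing in for the real solver, and the block files and function names are illustrative, not the actual extension's format):

```python
import os
import pickle
import tempfile

def save_blocks(data, n_blocks, directory):
    # Split (x, y) pairs round-robin into block files on disk.
    paths = []
    for b in range(n_blocks):
        path = os.path.join(directory, "block%d.pkl" % b)
        with open(path, "wb") as f:
            pickle.dump(data[b::n_blocks], f)
        paths.append(path)
    return paths

def train_from_disk(paths, n_features, lr=0.1, epochs=50):
    # Each pass loads one block at a time; only that block is in memory.
    w = [0.0] * n_features
    for _ in range(epochs):
        for path in paths:
            with open(path, "rb") as f:
                block = pickle.load(f)
            for x, y in block:
                margin = y * sum(wj * xj for wj, xj in zip(w, x))
                if margin < 1.0:  # hinge-loss subgradient step
                    w = [wj + lr * y * xj for wj, xj in zip(w, x)]
    return w

# Linearly separable toy data, split into two on-disk blocks.
data = [([2.0, 1.0], 1), ([1.5, 2.0], 1),
        ([-1.0, -2.0], -1), ([-2.0, -0.5], -1)]
with tempfile.TemporaryDirectory() as d:
    paths = save_blocks(data, n_blocks=2, directory=d)
    w = train_from_disk(paths, n_features=2)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else -1
         for x, _ in data]
```

The actual extension at LIBSVM Tools runs a proper LIBLINEAR solver on each (compressed) block; the sketch only shows why reading one block at a time suffices.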


Experiments on Dual Logistic Regression and Maximum Entropy

Programs used to generate experiment results in the paper

Hsiang-Fu Yu, Fang-Lan Huang, and Chih-Jen Lin. Dual Coordinate Descent Methods for Logistic Regression and Maximum Entropy Models. Machine Learning, 85:41-75, 2011.

can be found in this zip file.


Comparing Large-scale L1-regularized Linear Classifiers

You can directly use LIBLINEAR for efficient L1-regularized classification. Use the code here only if you are interested in redoing our experiments. The running time is long because each solver is run until the optimization problems are solved accurately.
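For intuition about what these solvers optimize: L1-regularized classification minimizes a loss plus λ‖w‖₁, which drives many weights exactly to zero. A small proximal-gradient (ISTA) sketch for L1-regularized logistic regression — a generic method shown only for illustration, not one of the solvers compared in the paper:

```python
import math

def soft_threshold(v, t):
    # Proximal operator of t*|v|: shrink toward zero, clip at zero.
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista_l1_logreg(X, y, lam=0.1, step=0.1, iters=500):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        # Gradient of the averaged logistic loss (1/n) sum log(1+exp(-y w.x)).
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            coef = -yi / (1.0 + math.exp(margin))
            for j in range(d):
                grad[j] += coef * xi[j] / n
        # Gradient step on the smooth part, then soft-threshold for the L1 term.
        w = [soft_threshold(w[j] - step * grad[j], step * lam)
             for j in range(d)]
    return w

# Only the first feature is informative; L1 tends to drive the
# uninformative weights to (near) zero.
X = [[1.0, 0.0, 0.3], [0.8, 0.1, -0.2],
     [-1.0, 0.0, 0.1], [-0.9, -0.1, -0.3]]
y = [1, 1, -1, -1]
w = ista_l1_logreg(X, y)
```

The sparsity of the returned weights is the point of the L1 term; the solvers compared in the paper reach the same kind of solution far faster on large data.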


Experiments on Degree-2 Polynomial Mappings of Data

Programs used to generate experiment results in Section 5 of the paper

Yin-Wen Chang, Cho-Jui Hsieh, Kai-Wei Chang, Michael Ringgaard and Chih-Jen Lin. Low-Degree Polynomial Mapping of Data for SVM, JMLR 2010,

can be found in this zip file.

Use the files here only if you are interested in redoing our experiments. To apply the method to your own applications, all you need is a LIBLINEAR extension. Check "fast training/testing of degree-2 polynomial mappings of data" at LIBSVM Tools.
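The idea behind the approach is to train a linear model on an explicit degree-2 feature expansion, so the linear inner product reproduces a degree-2 polynomial kernel. A minimal Python sketch of one such mapping (using γ=1, coef0=1 for simplicity; the exact scaling and feature ordering in the extension may differ):

```python
import math

def degree2_map(x):
    # Explicit mapping phi(x) such that phi(x).phi(z) == (1 + x.z)**2.
    phi = [1.0]
    phi += [math.sqrt(2.0) * v for v in x]   # linear terms
    phi += [v * v for v in x]                # squared terms
    n = len(x)
    for i in range(n):                       # cross terms x_i * x_j, i < j
        for j in range(i + 1, n):
            phi.append(math.sqrt(2.0) * x[i] * x[j])
    return phi

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, z = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0]
lhs = dot(degree2_map(x), degree2_map(z))   # linear product in mapped space
rhs = (1.0 + dot(x, z)) ** 2                # degree-2 polynomial kernel
```

Because the mapped dimensionality is O(n²) in the number of original features, training a linear model on φ(x) can beat kernel methods on sparse, low-degree data, which is the trade-off the paper studies.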


Experiments on Maximum Entropy models

Programs used to generate experiment results in the paper

Fang-Lan Huang, Cho-Jui Hsieh, Kai-Wei Chang, and Chih-Jen Lin. Iterative Scaling and Coordinate Descent Methods for Maximum Entropy Models, JMLR 2010,

can be found in this zip file.


Comparing various methods for large-scale linear SVM

Programs used to generate experiment results in the paper

C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. Sundararajan, and S. Sathiya Keerthi. A Dual Coordinate Descent Method for Large-scale Linear SVM, ICML 2008,

can be found in this zip file.
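The core update of dual coordinate descent is simple enough to sketch: maintain w = Σᵢ αᵢyᵢxᵢ and update each αᵢ by a closed-form one-variable minimization. A minimal Python sketch for the L1-loss case (omitting the shrinking and random permutation the paper uses for speed):

```python
def dcd_linear_svm(X, y, C=1.0, iters=50):
    # Dual coordinate descent for L1-loss linear SVM:
    #   min_a 0.5 a'Qa - e'a,  0 <= a_i <= C,  Q_ij = y_i y_j x_i.x_j
    # maintaining w = sum_i a_i y_i x_i so each gradient costs O(#features).
    n_features = len(X[0])
    w = [0.0] * n_features
    alpha = [0.0] * len(X)
    qii = [sum(v * v for v in x) for x in X]   # diagonal of Q
    for _ in range(iters):
        for i, (x, yi) in enumerate(zip(X, y)):
            if qii[i] == 0.0:
                continue
            g = yi * sum(wj * xj for wj, xj in zip(w, x)) - 1.0
            a_new = min(max(alpha[i] - g / qii[i], 0.0), C)
            d = a_new - alpha[i]
            if d != 0.0:
                alpha[i] = a_new
                w = [wj + d * yi * xj for wj, xj in zip(w, x)]
    return w

X = [[2.0, 1.0], [1.5, 2.0], [-1.0, -2.0], [-2.0, -0.5]]
y = [1, 1, -1, -1]
w = dcd_linear_svm(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else -1 for x in X]
```

Keeping w up to date is what makes each coordinate step linear in the number of features rather than in the number of examples; that observation is central to the method's speed on large sparse data.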


Comparing various methods for large-scale linear SVM

Programs used to generate experiment results in the paper

K.-W. Chang, C.-J. Hsieh, and C.-J. Lin, Coordinate Descent Method for Large-scale L2-loss Linear SVM, JMLR 2008,

can be found in this zip file.


Comparing various methods for logistic regression

Programs used to generate experiment results in the paper

C.-J. Lin, R. C. Weng, and S. S. Keerthi. Trust region Newton method for large-scale logistic regression, JMLR 2008,

can be found in this zip file.

We include LBFGS and a modified version of SVMlin in the experiments. Please check their respective COPYRIGHT notices.


Please send comments and suggestions to Chih-Jen Lin.