LIBLINEAR Experiments

Machine Learning Group at National Taiwan University

This page provides the source code for papers related to LIBLINEAR.


Release notes for version 2.42

See this file.

Release notes for version 2.40

See this file.

A study on truncated Newton methods for linear classification (after version 2.40)

Code for experiments of this paper

L. Galli and C.-J. Lin. A study on truncated Newton methods for linear classification. IEEE Transactions on Neural Networks and Learning Systems, 2021, and supplementary materials can be found at this page.


Preconditioning in the Newton CG implementation (after version 2.20)

Code for experiments of this paper

C.-Y. Hsia, W.-L. Chiang, and C.-J. Lin. Preconditioned Conjugate Gradient Methods in Truncated Newton Frameworks for Large-scale Linear Classification. Asian Conference on Machine Learning (ACML), 2018 (best paper award), and supplementary materials can be found at this page.


New trust-region update rule in the primal Newton solver

Code for experiments of this paper

C.-Y. Hsia, Y. Zhu, and C.-J. Lin. A study on trust region update rules in Newton methods for large-scale linear classification. Asian Conference on Machine Learning (ACML), 2017,

and supplementary materials can be found at this page.


Automatic and efficient parameter selection

Details of the paper

B.-Y. Chu, C.-H. Ho, C.-H. Tsai, C.-Y. Lin, and C.-J. Lin. Warm Start for Parameter Selection of Linear Classifiers, ACM KDD 2015,

can be found at this page.
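The warm-start strategy of this paper is implemented as the -C option in released LIBLINEAR, which searches over a range of C values. A minimal sketch using the LIBLINEAR Python interface; the data file name is a placeholder:

    # Warm-start parameter search (LIBLINEAR's -C option).
    # Assumes the LIBLINEAR Python interface is installed,
    # e.g. via `pip install liblinear-official`.
    from liblinear.liblinearutil import svm_read_problem, train

    y, x = svm_read_problem('heart_scale')  # placeholder data file
    # '-C' tries a range of C values, warm-starting each solve from
    # the previous solution; '-s 0' selects logistic regression.
    # Depending on the LIBLINEAR version, the result is
    # (best_C, best_rate) or (best_C, best_p, best_rate).
    result = train(y, x, '-C -s 0')
    print(result)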


Experiments on linear rankSVM

Programs used to generate experiment results in the paper

C.-P. Lee and C.-J. Lin. Large-scale Linear RankSVM. Technical report, 2013.

can be found in this tar.gz file.

Use files here only if you are interested in redoing our experiments. To apply the method to your applications, all you need is a LIBLINEAR extension. Check "Large-scale linear rankSVM" at LIBSVM Tools.
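For reference, ranking data for such tools extends the LIBSVM format with a query identifier per instance (the qid convention of SVM-light style rankers). A hypothetical three-instance sketch; check the extension's README for the exact format it expects:

    3 qid:1 1:0.53 2:0.12
    1 qid:1 1:0.13 2:0.40
    2 qid:2 1:0.87 2:0.62

Each line gives a relevance label, the query the instance belongs to, and sparse index:value features; pairwise preferences are formed only between instances of the same query.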


Experiments on linear support vector regression

Programs used to generate experiment results in the paper

C.-H. Ho and C.-J. Lin. Large-scale Linear Support Vector Regression. JMLR, 2012,

can be found in this zip file.
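In released LIBLINEAR, the SVR solvers studied in this paper are selected with -s 11 (primal L2-loss), -s 12 (dual L2-loss), and -s 13 (dual L1-loss). A minimal sketch via the Python interface; the data file name is a placeholder:

    # L2-regularized support vector regression in LIBLINEAR.
    # Assumes the LIBLINEAR Python interface is installed.
    from liblinear.liblinearutil import svm_read_problem, train, predict

    y, x = svm_read_problem('housing_scale')  # placeholder data file
    # -s 11: primal L2-loss SVR; -c is the regularization parameter
    # and -p the epsilon of the epsilon-insensitive loss.
    m = train(y, x, '-s 11 -c 1 -p 0.1')
    labels, stats, vals = predict(y, x, m)
    # For regression, stats holds mean squared error and the
    # squared correlation coefficient.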


Experiments on linear classification when data cannot fit in memory

An algorithm in

H.-F. Yu, C.-J. Hsieh, K.-W. Chang, and C.-J. Lin. Large linear classification when data cannot fit in memory. ACM KDD 2010 (best research paper award). Extended version appeared in ACM Transactions on Knowledge Discovery from Data, 5:23:1--23:23, 2012.

has been implemented as an extension of LIBLINEAR. It aims to handle data larger than your memory capacity. It can be found in LIBSVM Tools.

To repeat the experiments in our paper, check this tgz file. Don't use it unless you want to regenerate our figures. For your own experiments, use the LIBLINEAR extension at LIBSVM Tools.


Experiments on Dual Logistic Regression and Maximum Entropy

Programs used to generate experiment results in the paper

Hsiang-Fu Yu, Fang-Lan Huang, and Chih-Jen Lin. Dual Coordinate Descent Methods for Logistic Regression and Maximum Entropy Models. Machine Learning, 85:41-75, 2011.

can be found in this zip file.
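The dual coordinate descent method for logistic regression from this paper is available in released LIBLINEAR as solver -s 7; the primal trust-region solver for the same problem is -s 0. A minimal sketch; the data file name is a placeholder:

    # Primal vs. dual solvers for L2-regularized logistic regression.
    # Assumes the LIBLINEAR Python interface is installed.
    from liblinear.liblinearutil import svm_read_problem, train

    y, x = svm_read_problem('heart_scale')  # placeholder data file
    m_dual = train(y, x, '-s 7 -c 1')    # dual coordinate descent
    m_primal = train(y, x, '-s 0 -c 1')  # primal Newton method
    # Both solve the same optimization problem, so up to the
    # stopping tolerance the two models agree.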


Comparing Large-scale L1-regularized Linear Classifiers

You can directly use LIBLINEAR for efficient L1-regularized classification. Use the code here only if you are interested in redoing our experiments. The running time is long because each solver is run until it solves the optimization problem to high accuracy.
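In released LIBLINEAR, the L1-regularized solvers are selected with -s 5 (L2-loss SVC) and -s 6 (logistic regression). A minimal sketch; the data file name is a placeholder:

    # L1-regularized linear classification with LIBLINEAR.
    # Assumes the LIBLINEAR Python interface is installed.
    from liblinear.liblinearutil import svm_read_problem, train

    y, x = svm_read_problem('news20.binary')  # placeholder data file
    m_svc = train(y, x, '-s 5 -c 1')  # L1-regularized L2-loss SVC
    m_lr = train(y, x, '-s 6 -c 1')   # L1-regularized logistic regression
    # L1 regularization drives many weights to exactly zero,
    # so the resulting models are sparse.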


Experiments on Degree-2 Polynomial Mappings of Data

Programs used to generate experiment results in Section 5 of the paper

Yin-Wen Chang, Cho-Jui Hsieh, Kai-Wei Chang, Michael Ringgaard, and Chih-Jen Lin. Training and Testing Low-degree Polynomial Data Mappings via Linear SVM, JMLR 2010,

can be found in this zip file.

Use files here only if you are interested in redoing our experiments. To apply the method to your applications, all you need is a LIBLINEAR extension. Check "fast training/testing of degree-2 polynomial mappings of data" at LIBSVM Tools.
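The idea of the paper is to expand each instance by an explicit degree-2 polynomial mapping and train an ordinary linear classifier on the mapped data, so that inner products of mapped vectors reproduce the degree-2 polynomial kernel. A small self-contained sketch of such a mapping (an illustration, not the authors' optimized implementation):

    # Explicit degree-2 polynomial mapping phi such that
    # phi(x) . phi(z) == (gamma * x.z + r)**2.
    import numpy as np

    def poly2_map(x, gamma=1.0, r=1.0):
        n = len(x)
        i, j = np.triu_indices(n, k=1)            # index pairs i < j
        return np.concatenate((
            [r],                                  # constant term
            np.sqrt(2.0 * r * gamma) * x,         # linear terms
            gamma * x ** 2,                       # squared terms
            np.sqrt(2.0) * gamma * x[i] * x[j],   # cross terms
        ))

    x = np.array([1.0, 2.0, 3.0])
    z = np.array([0.5, -1.0, 2.0])
    # The mapping reproduces the kernel value exactly:
    assert np.isclose(poly2_map(x) @ poly2_map(z), (x @ z + 1.0) ** 2)

Training a linear SVM on the mapped data is then equivalent to a kernel SVM with the degree-2 polynomial kernel, while keeping the speed of linear-SVM training.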


Experiments on Maximum Entropy models

Programs used to generate experiment results in the paper

Fang-Lan Huang, Cho-Jui Hsieh, Kai-Wei Chang, and Chih-Jen Lin. Iterative Scaling and Coordinate Descent Methods for Maximum Entropy Models, JMLR 2010,

can be found in this zip file.


Comparing various methods for large-scale linear SVM

Programs used to generate experiment results in the paper

C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. Sundararajan, and S. Sathiya Keerthi. A Dual Coordinate Descent Method for Large-scale Linear SVM, ICML 2008,

can be found in this zip file.
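The dual coordinate descent solvers from this paper are available in released LIBLINEAR as -s 3 (L1-loss, i.e. hinge) and -s 1 (L2-loss, i.e. squared hinge). A minimal sketch; the data file name is a placeholder:

    # Dual coordinate descent for linear SVM in LIBLINEAR.
    # Assumes the LIBLINEAR Python interface is installed.
    from liblinear.liblinearutil import svm_read_problem, train

    y, x = svm_read_problem('rcv1_train.binary')  # placeholder data file
    m_l1 = train(y, x, '-s 3 -c 1')  # dual CD, L1-loss (hinge)
    m_l2 = train(y, x, '-s 1 -c 1')  # dual CD, L2-loss (squared hinge)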


Comparing various methods for large-scale L2-loss linear SVM

Programs used to generate experiment results in the paper

K.-W. Chang, C.-J. Hsieh, and C.-J. Lin. Coordinate Descent Method for Large-scale L2-loss Linear SVM, JMLR 2008,

can be found in this zip file.


Comparing various methods for logistic regression

Programs used to generate experiment results in the paper

C.-J. Lin, R. C. Weng, and S. S. Keerthi. Trust region Newton method for large-scale logistic regression, JMLR 2008,

can be found in this zip file.

We include LBFGS and a modified version of SVMlin in the experiments. Please check their respective COPYRIGHT notices.
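The trust region Newton method (TRON) from this paper is the solver behind LIBLINEAR's -s 0 (primal L2-regularized logistic regression). A minimal sketch with five-fold cross validation; the data file name is a placeholder:

    # Trust region Newton method for logistic regression.
    # Assumes the LIBLINEAR Python interface is installed.
    from liblinear.liblinearutil import svm_read_problem, train

    y, x = svm_read_problem('heart_scale')  # placeholder data file
    acc = train(y, x, '-s 0 -c 1 -v 5')     # '-v 5': 5-fold CV accuracy
    print(acc)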


Please send comments and suggestions to Chih-Jen Lin.