Your comments on our work are very welcome. These files may
differ slightly from the published versions because of
formatting issues and error fixing after publication.

Z. Que and C.J. Lin.
One-class SVM probabilistic outputs. Technical report, 2022.
(experimental code).

Y. Liu, J.N. Yen, B. Yuan, R. Shi, P. Yan, and Chih-Jen Lin.
Practical counterfactual policy learning for top-K recommendations.
ACM KDD 2022.
Supplementary materials and experimental code are available
here.

S.A. Chen, J.J. Liu, T.H. Yang, H.T. Lin, and Chih-Jen Lin.
Even the simplest baseline needs careful re-investigation: a case study on XML-CNN.
NAACL 2022.
(supplementary materials included in the paper pdf file; experimental code; data).

L.C. Lin, C.H. Liu, C.M. Chen, K.C. Hsu, I.F. Wu, M.F. Tsai, and Chih-Jen Lin.
On the use of unrealistic predictions in hundreds of papers evaluating graph representations.
AAAI 2022.
Supplementary materials and experimental code are available
here.

J.N. Yen and Chih-Jen Lin.
Limited-memory Common-directions Method With Subsampled Newton Directions for Large-scale Linear Classification.
ICDM 2021.
Supplementary materials and experimental code are available
here.

B. Yuan, Y.S. Li, P. Quan, and Chih-Jen Lin.
Efficient optimization methods for extreme similarity learning with nonlinear embeddings.
ACM KDD 2021.
Supplementary materials and experimental code are available
here. Note that the Netflix data set
is not available, as we are not allowed to redistribute it.

J.J. Liu, T.H. Yang, S.A. Chen, and Chih-Jen Lin.
Parameter selection: why we should pay more attention to it.
Proceedings of the 59th Annual Meeting of the
Association for Computational Linguistics (ACL), 2021 (short paper).
Supplementary materials and experimental code are available
here.

G. Galvan, M. Lapucci, C.J. Lin and M. Sciandrone.
A two-level decomposition framework exploiting first and second order information for SVM training problems.
Journal of Machine Learning Research, 22(23):1-38, 2021.

C.P. Lee, P.W. Wang
and C.J. Lin.
Limited-memory common-directions method for large-scale
optimization: convergence, parallelization, and distributed
optimization.
Mathematical Programming Computation, 2022.
Code for experiments.
Implementations available in Distributed LIBLINEAR.

B. Yuan, Y. Liu, J.Y. Hsia, Z. Dong, and C.J. Lin.
Unbiased Ad click prediction for position-aware advertising systems.
ACM Recommender Systems, 2020.
Supplementary materials are at the end of the paper file,
and experimental code is available.

L. Galli and C.J. Lin.
A study on truncated Newton methods for linear classification.
IEEE Transactions on Neural Networks and Learning Systems, 33:2828-2841, 2022.
(supplementary materials and code for experiments.)
Implementation available in
LIBLINEAR (after version 2.40).

H.Y. Chou, P.Y. Lin, and C.J. Lin.
Dual coordinate-descent methods for linear one-class SVM and SVDD.
SIAM International Conference on Data Mining, 2020.
(Supplementary materials and code for experiments).
Implementation available in
LIBLINEAR (after version 2.40).

J.Y. Hsia and C.J. Lin.
Parameter selection for linear support vector regression. IEEE Transactions on Neural Networks and Learning Systems, 31:5639-5644 (2020).
(details and supplementary materials/exp code).
Implementation available in
LIBLINEAR (after version 2.30).

C.C. Wang, K.L. Tan, and C.J. Lin.
Newton methods for convolutional neural networks.
ACM Transactions on Intelligent
Systems and Technology, 11:19:1-19:30, 2020.
(supplementary materials,
code).

C.C. Chiu, P.Y. Lin, and C.J. Lin.
Two-variable dual coordinate descent methods for linear SVM with/without the bias term.
SIAM International Conference on Data Mining, 2020.
(Supplementary materials and code for experiments)

B. Yuan, J.Y. Hsia, M.Y. Yang, H. Zhu, C. Chang, Z. Dong,
and C.J. Lin.
Improving Ad click prediction by considering non-displayed events.
ACM International Conference on Information and Knowledge Management (CIKM) 2019.
Supplementary materials are at the end of the paper file,
and experimental code is available.

B. Yuan, M.Y. Yang, J.Y. Hsia, H. Zhu, Z. Liu, Z. Dong,
and C.J. Lin.
One-class field-aware factorization machines for recommender systems with implicit feedbacks. Technical report, 2019.
Supplementary materials are at the end of the paper file,
and experimental code is available.

C.Y. Hsia, W.L. Chiang, and C.J. Lin.
Preconditioned conjugate gradient methods in truncated Newton frameworks for large-scale linear classification.
Asian Conference on Machine Learning (ACML), 2018 (best paper award).
Implementation available in
LIBLINEAR (after version 2.20).
Supplementary materials
and code for experiments. The proof of the main theorem has been updated after the publication of the paper.

W.L. Chiang, Y.S. Li, C.P. Lee, and C.J. Lin.
Limited-memory common-directions method for distributed L1-regularized linear classification.
SIAM International Conference on Data Mining, 2018.
Supplementary materials and code for experiments.
Implementations available in Distributed LIBLINEAR.

C.C. Wang, K. L. Tan, C.T. Chen, Y.H. Lin, S. S. Keerthi, D. Mahajan, S. Sundararajan, and C.J. Lin.
Distributed Newton methods for deep neural networks.
Neural Computation,
30:1673-1724, 2018.
(Supplement and code for the paper's experiments).

Y. Zhuang, Y.C. Juan, G.X. Yuan, and C.J. Lin.
Naive parallelization of coordinate descent methods and an application on multi-core L1-regularized classification.
ACM International Conference on Information and Knowledge Management (CIKM) 2018 (Supplementary materials,
code for paper's experiments).

C.Y. Hsia, Y. Zhu, and C.J. Lin.
A study on trust region update rules in Newton methods for large-scale linear classification.
Asian Conference on Machine Learning (ACML), 2017.
Implementation available in
LIBLINEAR (after version 2.11).
Supplementary materials and experimental code.

H.F. Yu, H.Y. Huang, I. S. Dhillon, and C.J. Lin.
A unified algorithm for one-class structured matrix factorization with side information.
AAAI 2017. Supplementary materials are at the end of the paper file,
and experimental code is available.

C.P. Lee, P.W. Wang, W. Chen, and C.J. Lin.
Limited-memory common-directions method for
distributed optimization and its application on
empirical risk minimization.
SIAM International Conference on Data Mining, 2017.
Supplementary materials and experimental code.
Implementations available in Distributed LIBLINEAR.

W.S. Chin, B.W. Yuan, M.Y. Yang, and
C.J. Lin.
An efficient alternating Newton method for learning factorization machines.
ACM Transactions on
Intelligent Systems and Technology,
9:72:1-72:31, 2018.
Software package, supplementary materials, and experimental code.

W.S. Chin, B.W. Yuan, M.Y. Yang, Y. Zhuang, Y.C. Juan, and
C.J. Lin.
LIBMF: A library for parallel matrix factorization in shared-memory systems.
Journal
of Machine Learning Research,
17(86):1-5, (2016).
Supplementary materials.

Y.C. Juan, W.S. Chin, Y. Zhuang, and C.J. Lin.
Field-aware factorization machines for CTR prediction,
ACM Recommender Systems, 2016.
Due to a change in some data settings, the experimental results have been updated
here and differ from those in the published proceedings.
Implementation available in LIBFFM package.
Experimental code.

W.L. Chiang, M.C. Lee, and C.J. Lin.
Parallel dual coordinate descent method for large-scale linear classification in multi-core environments,
ACM KDD 2016
(Implementation and supplementary materials available in
Multicore LIBLINEAR).

P.W. Wang, C.P. Lee, and C.J. Lin.
The common-directions method for regularized empirical risk minimization.
Journal
of Machine Learning Research,
20(58):1-49, 2019.
(Supplementary materials and experimental code).

H.F. Yu, M. Bilenko, and C.J. Lin.
Selection of negative samples for one-class matrix factorization.
SIAM International Conference on Data Mining, 2017.
(supplementary materials included in the paper pdf file; experimental code).

H.Y. Huang and C.J. Lin.
Linear and kernel classification: when to use which?,
SIAM International Conference on Data Mining, 2016.
(supplementary materials and experimental code).

M.C. Lee, W.L. Chiang, and C.J. Lin.
Fast matrix-vector multiplications for large-scale logistic regression on shared-memory systems,
ICDM 2015.
(Supplementary materials,
Implementation available in
Multicore LIBLINEAR).

B.Y. Chu, C.H. Ho, C.H. Tsai, C.Y. Lin, and C.J. Lin.
Warm start for parameter selection of linear classifiers,
ACM KDD 2015.
(Implementation available in
LIBLINEAR; see details and supplementary materials).

W.S. Chin, Y. Zhuang, Y.C. Juan, and C.J. Lin.
A learning-rate schedule for stochastic
gradient methods to matrix factorization, PAKDD, 2015.

Y. Zhuang, W.S. Chin, Y.C. Juan, and C.J. Lin. Distributed Newton method for regularized logistic regression, PAKDD 2015.
Implementations available in Distributed LIBLINEAR.

C.C. Wang, C.H. Huang, and
C.J. Lin.
Subsampled Hessian Newton methods for supervised learning.
Neural Computation, 27(2015), 1766-1795.
(supplementary materials,
code for experiments)

C.Y. Lin, C.H. Tsai, C.P. Lee, and
C.J. Lin.
Large-scale logistic regression and linear support vector machines using Spark.
IEEE International Conference on Big Data, 2014.
(see Distributed LIBLINEAR
and supplementary materials)

C.H. Tsai, C.Y. Lin, and
C.J. Lin.
Incremental and decremental training for linear classification.
ACM KDD 2014.
(see Extension of LIBLINEAR,
supplementary materials, and experimental code).

T.M. Kuo, C.P. Lee and
C.J. Lin.
Large-scale Kernel RankSVM.
SIAM International Conference on Data Mining, 2014.
(supplementary materials,
code
for experiments in the paper).

W.S. Chin, Y. Zhuang, Y.C. Juan, and C.J. Lin.
A Fast Parallel Stochastic Gradient Method for Matrix Factorization in Shared Memory Systems.
ACM Transactions on Intelligent
Systems and Technology, 6:2:1-2:24, 2015.
(Implementation available in
LIBMF, code for experiments in the paper).
A preliminary version appeared in
Proceedings of the ACM Recommender Systems, 2013,
and received the
best paper award (talk slides).

W.C. Chang, C.P. Lee, and C.J. Lin.
A revisit to support vector data description (SVDD).
Technical report, 2013.

P.W. Wang and
C.J. Lin.
Iteration Complexity of Feasible Descent Methods for Convex Optimization.
Journal
of Machine Learning Research,
15(2014), 1523-1548.

C.P. Lee and
C.J. Lin.
Large-scale Linear RankSVM.
Neural Computation, 26(2014), 781-817.
(supplementary materials,
LIBLINEAR extension for ranking code,
code
for experiments in the paper).

H.F. Yu, C.H. Ho, Y.C. Juan, and
C.J. Lin.
LibShortText: a library for short-text classification and analysis.
Technical report, 2013.
LibShortText software.

C.P. Lee, and
C.J. Lin.
A Study on L2-Loss (Squared Hinge-Loss) Multi-Class SVM.
Neural Computation, 25(2013), 1302-1323. (code).

C.H. Ho, and
C.J. Lin.
Large-scale Linear Support Vector Regression.
Journal
of Machine Learning Research,
13(2012), 3323-3348.
(Implementation available in
LIBLINEAR, code for experiments in the paper)

G.X. Yuan,
C.H. Ho, and
C.J. Lin.
Recent Advances of Large-scale Linear Classification.
Proceedings of the IEEE,
100(2012), 2584-2603.
This is a survey paper and we plan to keep updating it.
Your comments are very welcome.

C.C. Chang and
C.J. Lin.
LIBSVM: a library for support vector machines.
ACM Transactions on Intelligent
Systems and Technology, 2:27:1-27:27, 2011. This
LIBSVM implementation document was created in 2001
and since then has been actively maintained/updated.
pdf,
ps.gz,
ACM Digital lib,
LIBSVM page

G.X. Yuan,
C.H. Ho, and
C.J. Lin.
An Improved GLMNET for L1-regularized Logistic Regression and Support
Vector Machines.
Journal
of Machine Learning Research,
13(2012), 1999-2030.
A short version appears at ACM KDD 2011.
(Implementation available in
LIBLINEAR,
supplementary materials, code for the paper).

C.H. Ho, M.H. Tsai, and
C.J. Lin.
Active Learning and Experimental Design with SVMs.
JMLR Workshop and Conference Proceedings: Workshop on Active Learning and Experimental Design
16(2011), 71-84. Code.

H.F. Yu,
C.J. Hsieh,
K.W. Chang,
and
C.J. Lin,
Large linear classification when data cannot fit in memory.
ACM Transactions on Knowledge Discovery from Data, 5:23:1-23:23, 2012.
A preliminary version appeared at ACM KDD 2010 and received
best research paper award.
Code. Slides. Discussion and FAQ. Video,
ACM Digital lib

H.F. Yu,
F.L. Huang, and
C.J. Lin.
Dual coordinate descent methods for logistic regression and
maximum entropy models.
Machine Learning, 85(2011), 41-75.
(code)

R. C. Weng
and C.J. Lin.
A Bayesian approximation method for online ranking.
Journal
of Machine Learning Research,
12(2011), 267-300.
Code.

G.X. Yuan,
K.W. Chang, C.J. Hsieh, and
C.J. Lin.
A comparison of optimization methods
and software
for large-scale L1-regularized linear classification.
(supplementary materials, code).
Journal
of Machine Learning Research, 11(2010), 3183-3234.

Y.W. Chang, C.J. Hsieh,
K.W. Chang, M. Ringgaard, and
C.J. Lin.
Training and Testing Low-degree Polynomial Data Mappings via Linear SVM.
Journal
of Machine Learning Research, 11(2010), 1471-1490.
(Extension of LIBLINEAR,
code for experiments in the paper)

F.L. Huang, C.J. Hsieh,
K.W. Chang, and
C.J. Lin.
Iterative scaling and coordinate descent methods for maximum entropy models,
Journal
of Machine Learning Research
11(2010), 815-848.
A brief version appears
at ACL 2009 (short paper).
(code for
experiments in the paper)

R.E. Fan, K.W. Chang, C.J. Hsieh, X.R. Wang, and
C.J. Lin.
LIBLINEAR: A library for large linear classification.
Journal
of Machine Learning Research
9(2008), 1871-1874.
Note that we include
some implementation details in the appendices
of this paper. (LIBLINEAR page)

Y.W. Chang and
C.J. Lin.
Feature Ranking Using Linear SVM.
JMLR Workshop and Conference Proceedings: Causation and Prediction Challenge (WCCI 2008)
3(2008), 53-64.
Code.

W.Y. Chen, Y. Song, H. Bai,
C.J. Lin,
and E. Y. Chang.
Parallel Spectral Clustering in Distributed Systems.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(2011), 568-586.
A short version appears at ECML/PKDD 2008.
Code

S. S. Keerthi,
S. Sundararajan,
K.W. Chang,
C.J. Hsieh,
and
C.J. Lin,
A sequential dual method for large scale multi-class linear SVMs.
ACM KDD 2008.

C.J. Hsieh,
K.W. Chang,
C.J. Lin,
S. S. Keerthi,
and
S. Sundararajan.
A Dual Coordinate Descent Method for Large-Scale Linear
SVM.
ICML 2008.
(Implementation available in
LIBLINEAR,
code for experiments in the paper).

K.W. Chang,
C.J. Hsieh, and
C.J. Lin.
Coordinate Descent Method for Large-scale L2-loss Linear SVM.
Journal
of Machine Learning Research
9(2008), 1369-1398.
(code)

R.E. Fan and C.J. Lin.
A Study on Threshold Selection for Multi-label Classification, 2007.

C.J. Lin,
R. C. Weng,
and
S. S. Keerthi.
Trust region Newton method for large-scale logistic
regression.
Journal
of Machine Learning Research
9(2008), 627-650.
A short version appears
in ICML 2007.
Software available at
liblinear.

L. Bottou and
C.J. Lin.
Support Vector Machine Solvers.
In Large Scale Kernel Machines, L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, editors, 1-28, MIT Press, Cambridge, MA, 2007.

T.K. Huang,
C.J. Lin,
and
R. C. Weng.
Ranking Individuals by group comparisons.
Journal
of Machine Learning Research,
9(2008), 2187-2216.
A short version appears in ICML 2006.
Download data used in this work.

C.J. Lin.
On the Convergence of
Multiplicative Update Algorithms for
Nonnegative Matrix Factorization.
IEEE Transactions on Neural Networks, 18(2007), 1589-1596.

C.J. Lin.
Projected
gradient methods for nonnegative matrix factorization.
Neural Computation, 19(2007), 2756-2779.

R.E. Fan, P.H. Chen, and C.J. Lin.
Working Set Selection Using Second Order Information for Training SVM.
Journal
of Machine Learning Research, 6(2005), 1889-1918.

P.H. Chen, R.E. Fan, and C.J. Lin.
A Study on SMO-type Decomposition Methods for Support Vector Machines.
IEEE Transactions on Neural Networks, 17(2006), 893-908.

Y.W. Chen and C.J. Lin,
Combining SVMs with various feature selection strategies.
In the book
"Feature Extraction: Foundations and Applications," Springer, 2006.

T.K. Huang,
R. C. Weng,
and
C.J. Lin.
Generalized Bradley-Terry Models and Multiclass Probability Estimates.
Journal
of Machine Learning Research, 7(2006), 85-115.
A (very) short version of this paper appears in
NIPS 2004.

T.F. Wu,
C.J. Lin, and
R. C. Weng.
Probability Estimates for Multiclass Classification by Pairwise Coupling.
Journal
of Machine Learning Research, 5(2004), 975-1005.
A short version appears in
NIPS 2003.

C.W. Hsu, C.C. Chang,
and C.J. Lin.
A practical guide to support vector classification.
Technical report, Department of Computer
Science, National Taiwan University,
July 2003.

M.W. Chang and C.J. Lin.
Leave-one-out Bounds for Support Vector
Regression Model Selection.
Neural Computation, 17(2005), 1188-1222.

H.T. Lin,
C.J. Lin, and
R. C. Weng.
A note on Platt's probabilistic outputs for support vector machines.
Machine Learning, 68(2007), 267-276.

P.H. Chen,
C.J. Lin,
and
B. Schölkopf.
A tutorial on nu-support vector machines.
Applied Stochastic Models in Business and Industry, 21(2005), 111-136.

H.T. Lin
and
C.J. Lin.
A study on sigmoid kernels for SVM and the training
of non-PSD Kernels by
SMO-type methods.
March 2003.

W.C. Kao,
K.M. Chung,
T. Sun,
and
C.J. Lin.
Decomposition Methods for Linear Support Vector Machines.
Neural Computation,
16(2004), 1689-1704.

B.J. Chen, M.W. Chang, and
C.J. Lin.
Load Forecasting Using Support Vector Machines:
A Study on EUNITE Competition 2001.
IEEE Transactions on Power Systems.
19(2004), 1821-1830.

K.M. Chung, W.C. Kao,
C.L. Sun, L.L. Wang,
and
C.J. Lin.
Radius Margin Bounds for Support Vector Machines with the RBF Kernel.
Neural Computation,
15(2003), 2643-2681.

S. S. Keerthi
and
C.J. Lin.
Asymptotic behaviors of support vector machines with
Gaussian kernel.
Neural Computation, 15(2003), 1667-1689.

M.W. Chang, C.J. Lin, and
R. C. Weng.
Analysis of non-stationary time series using
support vector machines.
April 2002.

K.M. Lin and C.J. Lin.
A study on reduced support vector machines.
IEEE Transactions on Neural Networks, 14(2003), 1449-1459.

M.W. Chang, C.J. Lin, and
R. C. Weng.
Analysis of switching dynamics with
competing support vector machines.
Proceedings of IJCNN, May 2002.
The extended version is
here.
IEEE Transactions on Neural Networks, 15(2004), 720-727.

M.W. Chang, B.J. Chen and C.J. Lin.
EUNITE Network Competition:
Electricity Load Forecasting, November 2001.
Winner of the
EUNITE
worldwide competition on electricity load prediction.

C.J. Lin.
Linear convergence of a
decomposition method for support
vector machines, November 2001.

C.J. Lin.
Asymptotic convergence of an SMO algorithm without any assumptions.
IEEE Transactions on Neural Networks, 13(2002), 248-250.

C.C. Chang and C.J. Lin.
IJCNN 2001 Challenge: Generalization Ability and
Text Decoding,
Proceedings of IJCNN, July 2001. Winner of the
IJCNN Challenge.

C.J. Lin.
A Formal Analysis of Stopping Criteria of
Decomposition Methods for Support
Vector Machines,
IEEE Transactions on Neural Networks, 13(2002), 1045-1052.

C.W. Hsu and C.J. Lin.
A comparison of methods
for multiclass support vector machines,
IEEE Transactions on Neural Networks, 13(2002), 415-425.

C.C. Chang and C.J. Lin.
Training
nu-support vector regression:
theory and algorithms,
Neural Computation, 14(2002), 1959-1977.
Implementation available in
libsvm.

S.P. Liao, H.T. Lin, and
C.J. Lin.
A note on
the decomposition methods
for support vector regression.
Neural Computation, 14(2002), 1267-1281.

J.H. Lee and
C.J. Lin.
Automatic model selection for support vector machines, November 2000.
Implementation available in
looms.

C.J. Lin.
On the convergence
of the decomposition method for
support vector machines,
IEEE Transactions on Neural Networks, 12(2001), 1288-1298.

C.W. Hsu and C.J. Lin.
A simple decomposition method for
support vector machines,
Machine Learning, 46(2002), 291-314.
Implementation available in
bsvm.

C.C. Chang and C.J. Lin.
Training
nu-Support Vector Classifiers:
Theory and Algorithms,
Neural
Computation, 13(9), 2001, 2119-2147.
Implementation available in
libsvm.

C.J. Lin.
Formulations of support vector machines: a note from an optimization point of view.
Neural Computation, 13(2), 2001, 307-317.

C.C. Chang, C.W. Hsu, and
C.J. Lin.
The analysis of decomposition methods for support vector machines.
in
Proceedings of the
Workshop on Support Vector Machines,
Sixteenth International Joint Conference on
Artificial Intelligence
(IJCAI 99).
Extended version appears in
IEEE Transactions on Neural Networks, 11(2000), 1003-1008.

C.J. Lin and R. Saigal.
An incomplete Cholesky factorization for dense matrices.
BIT, 40(2000), 536-558.

C.J. Lin and J. J. Moré.
Newton's method for large bound-constrained optimization problems.
SIAM Journal on
Optimization, 9(1999), 1100-1127.
(code)

S.Y. Wu,
S.C. Fang
and C.J. Lin,
Solving the General Capacity
Problem.
Annals of Operations
Research, 103(2001), 193-211.

S.C. Fang, C.J. Lin, and S.Y. Wu.
Relaxations of the cutting plane method for
quadratic semi-infinite programming.
Journal of Computational
and Applied Mathematics
, 129(2001), 89104.

C.J. Lin and J. J. Moré.
Incomplete Cholesky Factorizations with Limited Memory.
SIAM Journal on
Scientific Computing, 21(1999), 24-45.

C.J. Lin.
Preconditioning Dense Linear Systems from
Large-Scale Semidefinite Programming Problems.
In Proceedings of the
Fifth Copper Mountain
Conference on Iterative Methods, 1998.
Second prize of the student paper competition.

S.C. Fang, S.Y. Wu and C.J. Lin,
A relaxed cutting plane
method for solving linear semi-infinite
programming problems.
Journal of
Optimization Theory and Applications, 99(1998), 759-779.

C.J. Lin, S.C. Fang, and S.Y. Wu.
An Unconstrained Convex Programming Approach for Solving Linear Semi-Infinite Programming Problems.
SIAM Journal on Optimization, 8, 1998, 443-456.

Papers before 1998 are not listed.
cjlin@csie.ntu.edu.tw