Binary-class Cross Validation with Different Criteria: LIBLINEAR

The setting is similar to that for LIBSVM, but because LIBLINEAR supports internal parameter selection, we directly provide the modified code for you.


How to Run this Tool

  1. Download and extract the zip file.
  2. Assign the global variable
    	double (*evaluation_function)(const size_t, const double *, const int *) = auc;
    in eval.cpp to the evaluation function you prefer. You can also assign precision, recall, fscore, bac, or ap here (see the example after this list).
  3. Compile the code.
    	make clean; make
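
For example, assuming the criteria listed in step 2 all share the same signature in eval.cpp, switching from AUC to the F-score is a one-line change:

	// in eval.cpp: use the F-score instead of AUC as the cross-validation criterion
	double (*evaluation_function)(const size_t, const double *, const int *) = fscore;

After the change, recompile the code as in step 3.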

How to Display Multiple Evaluation Values

The setting is the same as that for LIBSVM.


Parameter Selection

In LIBLINEAR, the -C option performs cross validation over a grid of values to select the best regularization parameter. The code has been extended so that the search can use any specified evaluation function. See the following example.

	$ ./train -C -s 0 heart_scale
	Doing parameter search with 5-fold cross validation.
	...
	AUC = 0.898944
	log2c=  -8.00	rate=89.8944
	AUC = 0.899056
	log2c=  -7.00	rate=89.9056
	AUC = 0.900111
	log2c=  -6.00	rate=90.0111
	AUC = 0.900333
	log2c=  -5.00	rate=90.0333
	AUC = 0.900778
	log2c=  -4.00	rate=90.0778
	AUC = 0.900111
	log2c=  -3.00	rate=90.0111
	AUC = 0.900222
	log2c=  -2.00	rate=90.0222
	AUC = 0.898889
	...
	Best C = 0.0625  CV accuracy = 90.0778%    
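
Once the search finishes, the selected value can be used to train the final model through the standard -c option, for example,

	$ ./train -c 0.0625 -s 0 heart_scale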

We assume the evaluation function satisfies the property that a better model gives a higher value.


How to Add New Evaluation Functions

The setting is the same as that for LIBSVM.
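
Roughly speaking, a new criterion is implemented as a function with the signature shown in step 2 above, and evaluation_function is then pointed to it. As a minimal sketch, assuming (as for the provided criteria) that the arguments are the number of validation instances, their decision values, and their true +1/-1 labels, a hypothetical criterion might look like

	// Hypothetical example for eval.cpp (a sketch, not part of the package).
	// Assumption: dec_values[i] > 0 means the prediction is +1 and ty[i] is
	// the true label in {+1, -1}.
	double zero_one_accuracy(const size_t size, const double *dec_values, const int *ty)
	{
		size_t correct = 0;
		for(size_t i = 0; i < size; i++)
			if((dec_values[i] > 0 ? 1 : -1) == ty[i])
				correct++;
		return (double)correct/size; // higher is better, as required above
	}

See the LIBSVM README for the exact steps.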


MATLAB Support

Through the LIBLINEAR MATLAB interface, for different criteria we support
  1. cross validation
  2. parameter selection
For example, you can run
    >> m = train(y, x, '-C -s 0')
and then
    >> m(1)
to get the selected regularization parameter. Similarly, you can do
    >> m = train(y, x, '-v 3')
for cross validation.

Unfortunately, the code for evaluating new instances via a trained model is not directly available yet. You can consider modifying do_binary_predict.m, which was designed for LIBSVM, but you need to replace svmtrain and svmpredict with train and predict, respectively.


Python Support

The situation is similar to that for MATLAB, though we support only

  1. parameter selection
That is, cross validation is not directly supported at the moment.
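
For example, assuming the interface behaves like the standard LIBLINEAR Python interface, in which train with the '-C' option returns the selected C together with its cross-validation value, parameter selection looks like

    >>> from liblinearutil import *
    >>> y, x = svm_read_problem('heart_scale')
    >>> best_C, best_rate = train(y, x, '-C -s 0')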

The code for evaluating new instances via a trained model is not available yet. However, some criteria such as the fscore can easily be calculated from the predicted labels. For example, you can use

    >>> p_label, p_acc, p_val = predict(y, x, m)  
to get the predicted labels. Then in Python it's easy to compare predicted and true labels (y).
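
For instance, a minimal sketch of computing the fscore of the positive class, assuming the labels are +1/-1 and +1 is treated as positive, is

    >>> # F-score = 2*TP / (2*TP + FP + FN), counting +1 as the positive class
    >>> tp = sum(1 for t, p in zip(y, p_label) if t == 1 and p == 1)
    >>> fp = sum(1 for t, p in zip(y, p_label) if t != 1 and p == 1)
    >>> fn = sum(1 for t, p in zip(y, p_label) if t == 1 and p != 1)
    >>> fscore = 2.0*tp/(2*tp+fp+fn) if tp+fp+fn > 0 else 0.0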


Please contact Chih-Jen Lin for any questions.