The setting is similar to that for LIBSVM, but due to the internal parameter selection supported in LIBLINEAR, we directly provide the code for you.
Change the line

double (*evaluation_function)(const size_t, const double *, const int *) = auc;

in eval.cpp to the evaluation function you prefer. You can also assign precision, recall, fscore, bac, or ap here. Then rebuild the code:
make clean; make
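For intuition about what these criteria compute, below is a plain Python sketch of AUC, the default above: the fraction of (positive, negative) pairs whose decision values are ranked correctly, with ties counted as half. This is only an illustration; the implementations actually used are the C++ functions in eval.cpp.

def auc(dec_values, labels):
    # AUC from decision values and +/-1 labels: the fraction of
    # (positive, negative) pairs ranked correctly, ties counted as half.
    pos = [d for d, y in zip(dec_values, labels) if y > 0]
    neg = [d for d, y in zip(dec_values, labels) if y <= 0]
    pairs = len(pos) * len(neg)
    if pairs == 0:
        raise ValueError('need both positive and negative labels')
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / pairs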
The setting is the same as that for LIBSVM.
In LIBLINEAR, the -C option performs cross validation over a grid of values to select the best regularization parameter. The code has been extended so that this search can use any specified evaluation function. See the following example.
$ ./train -C -s 0 heart_scale
Doing parameter search with 5-fold cross validation.
...
AUC = 0.898944  log2c= -8.00  rate=89.8944
AUC = 0.899056  log2c= -7.00  rate=89.9056
AUC = 0.900111  log2c= -6.00  rate=90.0111
AUC = 0.900333  log2c= -5.00  rate=90.0333
AUC = 0.900778  log2c= -4.00  rate=90.0778
AUC = 0.900111  log2c= -3.00  rate=90.0111
AUC = 0.900222  log2c= -2.00  rate=90.0222
AUC = 0.898889
...
Best C = 0.0625  CV accuracy = 90.0778%
We assume the evaluation function satisfies the property that a better model gives a higher value. In the example above, the search therefore returns C = 2^-4 = 0.0625, the parameter with the highest cross-validation AUC (0.900778).
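The selection loop is conceptually simple. The sketch below (plain Python, with a caller-supplied cv_score function standing in for the package's k-fold cross validation; it is not part of the package) shows the idea of keeping the parameter with the highest score.

def parameter_search(cv_score, log2c_range=range(-8, 3)):
    # cv_score(c) should return the cross-validation value of the chosen
    # evaluation function (e.g., AUC) for regularization parameter c.
    best_c, best_score = None, float('-inf')
    for log2c in log2c_range:
        c = 2.0 ** log2c
        score = cv_score(c)
        print('score = %.6f  log2c = %.2f' % (score, log2c))
        # A better model gives a higher value, so keep the maximum.
        if score > best_score:
            best_c, best_score = c, score
    return best_c, best_score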
The setting is the same as that for LIBSVM.
You can type

>> m = train(y, x, '-C -s 0')

and then

>> m(1)

to get the selected regularization parameter. Similarly, you can do

>> m = train(y, x, '-v 3')

for cross validation.
Unfortunately, the code for evaluating new instances via a trained model is not directly available yet. You can consider modifying do_binary_predict.m, which was designed for LIBSVM; however, you need to replace svmtrain and svmpredict with train and predict, respectively.
The situation is similar to that for MATLAB, though we support only part of the functionality above.
The code for evaluating new instances via a trained model is not available yet. However, some criteria such as the fscore can easily be calculated from the predicted labels. For example, you can use

>>> p_label, p_acc, p_val = predict(y, x, m)

to get the predicted labels; comparing them with the true labels y is then straightforward in Python.
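For instance, here is a small Python sketch (not part of the package) that computes the fscore of the positive class from p_label and y, assuming +1/-1 labels:

def fscore(true_labels, predicted_labels):
    # F-score of the positive (+1) class from true and predicted labels.
    tp = sum(1 for t, p in zip(true_labels, predicted_labels) if t > 0 and p > 0)
    fp = sum(1 for t, p in zip(true_labels, predicted_labels) if t <= 0 and p > 0)
    fn = sum(1 for t, p in zip(true_labels, predicted_labels) if t > 0 and p <= 0)
    if 2 * tp + fp + fn == 0:
        return 0.0
    # Equivalent to 2 * precision * recall / (precision + recall).
    return 2.0 * tp / (2 * tp + fp + fn)

>>> print(fscore(y, p_label))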