The package contains a README file which details all options, the data format, and the library calls. The model selection tool and the Python interface have a separate README under the directory python. The guide A practical guide to support vector classification shows beginners how to train/test their data. The paper LIBSVM: a library for support vector machines discusses the implementation of libsvm in detail.
See the change log.
Please cite the following document:
Chih-Chung Chang and Chih-Jen Lin, LIBSVM : a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
The BibTeX entry is as follows:
@Manual{CC01a,
  author = {Chih-Chung Chang and Chih-Jen Lin},
  title  = {{LIBSVM}: a library for support vector machines},
  year   = {2001},
  note   = {Software available at {\tt http://www.csie.ntu.edu.tw/\verb"~"cjlin/libsvm}},
}
The libsvm license ("the modified BSD license") is compatible with many free software licenses such as the GPL. Hence, it is very easy to use libsvm in your software. It can also be used in commercial products.
Yes, see libsvm tools.
This usually happens if you compile the code on one machine and run it on another which has incompatible libraries. Try to recompile the program on that machine or use static linking.
Build it as a project by choosing "Win32 Application." On the other hand, for "svm-train" and "svm-predict" you want to choose "Win32 Console Application." After libsvm 2.5, you can also use the file Makefile.win. See details in README.
You need to open a command window and type svmtrain.exe to see all options. Some examples are in the README file.
libsvm uses the so-called "sparse" format, where zero values do not need to be stored. Hence an instance with attributes

1 0 2 0

is represented as

1:1 3:2
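For example, a complete training line for this instance with label +1 would look like this (label first, then index:value pairs with indices starting from 1):

+1 1:1 3:2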
Currently libsvm supports only numerical data. You may have to change non-numerical data to numerical. For example, you can use several binary attributes to represent a categorical attribute.
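For instance (an illustrative encoding; libsvm does not do this automatically), a categorical attribute taking the three values {red, green, blue} can become three binary attributes:

red   -> (0,0,1)
green -> (0,1,0)
blue  -> (1,0,0)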
This is a controversial issue. The kernel evaluation (i.e. inner product) of sparse vectors is slower, so the total training time can be two to three times that of the dense format. However, we cannot support only the dense format, as then we cannot handle extremely sparse cases. Simplicity of the code is another concern. For now we have decided to support the sparse format only.
obj is the optimal objective value of the dual SVM problem. rho is the bias term in the decision function sgn(w^Tx - rho). nSV and nBSV are the numbers of support vectors and bounded support vectors (i.e., alpha_i = C). nu-svm is a somewhat equivalent form of C-SVM where C is replaced by nu; nu simply shows the corresponding parameter. More details are in the libsvm document.
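Concretely, obj is the optimal value of the following dual problem, as formulated in the libsvm document (written in LaTeX notation; Q_{ij} = y_i y_j K(x_i, x_j) and e is the all-ones vector):

\min_{\alpha}\ \tfrac{1}{2}\,\alpha^{T} Q \alpha - e^{T}\alpha
\quad \text{subject to} \quad y^{T}\alpha = 0,\ \ 0 \le \alpha_i \le C,\ i = 1,\dots,l

nSV counts the alpha_i > 0 and nBSV counts the alpha_i that reached the bound C.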
After the parameters, each line represents a support vector. Support vectors are listed in the order of the "labels" listed earlier (i.e., those from the first class in the "labels" list are grouped first, and so on). If k is the total number of classes, then in front of each support vector there are k-1 coefficients y*alpha, where the alpha are the dual solutions of the following two-class problems:

1 vs j, 2 vs j, ..., j-1 vs j, j vs j+1, j vs j+2, ..., j vs k

and y=1 in the first j-1 coefficients, y=-1 in the remaining k-j coefficients.
For example, if there are 4 classes, the file looks like:
+-+-+-+--------------------+
|1|1|1|                    |
|v|v|v|  SVs from class 1  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|2|                    |
|v|v|v|  SVs from class 2  |
|2|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 3  |
|3|3|4|                    |
+-+-+-+--------------------+
|1|2|3|                    |
|v|v|v|  SVs from class 4  |
|4|4|4|                    |
+-+-+-+--------------------+
We use float as the default because you can store more numbers in the cache. In general this is good enough, but for a few difficult cases (e.g. C very, very large) where the solutions are huge numbers, it is possible that the numerical precision of float is not enough.
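In the distributed svm.cpp this default appears as a typedef near the top of the file; changing float to double there switches the kernel cache to double precision at roughly twice the memory cost:

typedef float Qfloat;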
In general we suggest you try the RBF kernel first. A recent result by Keerthi and Lin (download paper here) shows that if RBF is used with model selection, then there is no need to consider the linear kernel. The kernel matrix using the sigmoid may not be positive definite, and in general its accuracy is not better than RBF (see the paper by Lin and Lin (download paper here)). Polynomial kernels are ok, but if a high degree is used, numerical difficulties tend to happen (consider that the dth power of a number less than 1 goes to 0 while that of a number greater than 1 goes to infinity; e.g. 0.5^10 is about 0.001 but 2^10 = 1024).
No, at this point libsvm solves linear and nonlinear SVMs in the same way. Note that there are some possible tricks to save training/testing time if the linear kernel is used. Hence libsvm is NOT particularly efficient for linear SVMs, especially for problems where the number of data points is much larger than the number of attributes. If you plan to solve this type of problem, you may want to check bsvm, which includes an efficient implementation for linear SVMs. More details can be found in the following study: K.-M. Chung, W.-C. Kao, T. Sun, and C.-J. Lin. Decomposition Methods for Linear Support Vector Machines.
On the other hand, you do not really need to solve linear SVMs. See the previous question about choosing kernels for details.
This usually happens when the model overfits the training data. If the attributes of your data are in large ranges, try to scale them; then the region of appropriate parameters may be larger. Note that libsvm includes the scaling program svm-scale.
Yes, you can do the following:
svm-scale -s scaling_parameters training_data > scaled_training_data
svm-scale -r scaling_parameters test_data > scaled_test_data
For the linear scaling method, if the RBF kernel is used and parameter selection is conducted, there is no difference. Assume Mi and mi are respectively the maximal and minimal values of the ith attribute. Scaling to [0,1] means
x' = (x - mi)/(Mi - mi)

For [-1,1],

x'' = 2(x - mi)/(Mi - mi) - 1.

In the RBF kernel,

x' - y' = (x - y)/(Mi - mi), x'' - y'' = 2(x - y)/(Mi - mi).

Hence, using (C,g) on the [0,1]-scaled data is the same as (C,g/4) on the [-1,1]-scaled data; see the check below.
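Written out as a short check (in LaTeX notation; g is the RBF parameter):

\|x'' - y''\|^{2} = 4\,\|x' - y'\|^{2}
\quad\Longrightarrow\quad
\exp(-g\,\|x' - y'\|^{2}) = \exp\!\left(-\tfrac{g}{4}\,\|x'' - y''\|^{2}\right)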
Though the performance is the same, the computational time may be different. For data with many zero entries, [0,1]-scaling keeps the sparsity of the input data and hence may save time.
Try the model selection tool grid.py in the python directory to find good parameters. To see the importance of model selection, please see my talk: Can support vector machines become a major classification method?
Yes, there is a -wi option. For example, if you use
svm-train -s 0 -c 10 -w1 1 -w-1 5 data_file
the penalty for class "-1" is larger. Note that this -w option is for C-SVC only.
Basically they are the same thing but with different parameters. The range of C is from zero to infinity, while nu is always in [0,1]. A nice property of nu is that it is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors.
You may want to check your data. Each training/testing instance must be on one line; it cannot span multiple lines. In addition, you have to remove empty lines.
In theory, libsvm is guaranteed to converge if the kernel matrix is positive semidefinite. After version 2.4 it can also handle non-PSD kernels such as the sigmoid (tanh). If it does not converge, you are probably handling an ill-conditioned situation (e.g. too large/small parameters) where numerical difficulties occur.
This may happen for some difficult cases (e.g. -c is large). You can try to use a looser stopping tolerance with -e. If that still doesn't work, you may want to contact us. We can show you some tricks on improving the training time.
We print out decision values for regression. For classification, as we have to solve several binary SVMs for multi-class cases, we do not print them out. However, you can easily modify the program to print them. Just add

printf("%f ", sum);

after the line

sum -= model->rho[p++];

in the file svm.cpp. Note that for binary SVC, in the implementation, the class of the first training point is treated as y = +1 in the decision function.
This may happen only when the cache is large, but each cached row is not large enough. Note: this problem is specific to the GNU C library used in Linux. The solution is as follows:

In our program we have malloc(), which uses two methods to allocate memory from the kernel. One is sbrk() and the other is mmap(). sbrk is faster, but mmap has a larger address space, so malloc uses mmap only if the requested memory size is larger than some threshold (by default 128k). In the case where each row is not large enough (#elements < 128k/sizeof(float)) but we need a large cache, the address space for sbrk can be exhausted. The solution is to lower the threshold to force malloc to use mmap, and to increase the maximum number of chunks allocated with mmap.
Therefore, in the main program (i.e. svm-train.c) you want to have
#include <malloc.h>

and then in main():

mallopt(M_MMAP_THRESHOLD, 32768);
mallopt(M_MMAP_MAX, 1000000);

You can also set the environment variables instead of writing them in the program:

$ M_MMAP_MAX=1000000 M_MMAP_THRESHOLD=32768 ./svm-train .....

More information can be found by

$ info libc "Malloc Tunable Parameters"
Simply update svm.cpp:

#if 1
void info(char *fmt,...)

to

#if 0
void info(char *fmt,...)
The reason why we have two functions is as follows: For the RBF kernel exp(-g |xi - xj|^2), if we calculate xi - xj first and then the norm square, there are 3n operations. Thus we consider exp(-g (|xi|^2 - 2dot(xi,xj) + |xj|^2)) and, by calculating all |xi|^2 at the beginning, the number of operations is reduced to 2n. This is for the training. For prediction we cannot do this, so a regular subroutine using the 3n operations is needed. The easiest way to have your own kernel is to put the same code in these two subroutines, replacing the existing kernel; the sketch below illustrates the two evaluation styles.
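A minimal dense-vector sketch of the two styles (this is not libsvm's actual code, which works on sparse svm_node arrays; the names here are illustrative):

#include <cmath>
#include <cstddef>
#include <vector>

// Training-style RBF evaluation: squared norms |xi|^2 are precomputed once,
// so each kernel value needs only one dot product (about 2n operations).
double rbf_with_precomputed_norms(const std::vector<double>& xi,
                                  const std::vector<double>& xj,
                                  double xi_sq, double xj_sq, double gamma)
{
    double dot = 0;
    for (std::size_t k = 0; k < xi.size(); ++k)
        dot += xi[k] * xj[k];
    return std::exp(-gamma * (xi_sq - 2 * dot + xj_sq));
}

// Prediction-style RBF evaluation: no precomputed norms, so we form xi - xj
// and its squared norm directly (about 3n operations).
double rbf_direct(const std::vector<double>& xi,
                  const std::vector<double>& xj, double gamma)
{
    double sq = 0;
    for (std::size_t k = 0; k < xi.size(); ++k) {
        double d = xi[k] - xj[k];
        sq += d * d;
    }
    return std::exp(-gamma * sq);
}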
It is one-against-one. We chose it after doing the following comparison: C.-W. Hsu and C.-J. Lin. A comparison of methods for multi-class support vector machines, IEEE Transactions on Neural Networks, 13 (2002), 415-425.
Cross validation is used for selecting good parameters. After finding them, you want to re-train on the whole data set without the -v option. For example:
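(parameter values here are illustrative)

svm-train -v 5 -c 10 -g 0.5 train_data
svm-train -c 10 -g 0.5 train_data

The first command only reports the 5-fold cross-validation accuracy and saves no model; the second re-trains on the whole set and writes train_data.model.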
Right now we use the default seed, so each time you run svm-train -v, the folds of validation data are the same. To have different seeds, you can add the following code to svm-train.c:
#include <time.h>

and in the beginning of the subroutine do_cross_validation(),

srand(time(0));
It is extremely easy. Taking C-SVC as an example, only two places in svm.cpp have to be changed. First, modify the following line of solve_c_svc from

s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y, alpha, Cp, Cn, param->eps, si, param->shrinking, param->cal_partial, param->gamma);

to

s.Solve(l, SVC_Q(*prob,*param,y), minus_ones, y, alpha, INF, INF, param->eps, si, param->shrinking, param->cal_partial, param->gamma);

Second, in the class SVC_Q, declare C as a private variable:

double C;

In the constructor we assign it to param.C:

this->C = param.C;

Then in the subroutine get_Q, after the for loop, add

if(i >= start && i < len) data[i] += 1/C;
We intend to have this zero division. Under the IEEE floating-point standard, zero division returns infinity. Then, with the operations later bounding it, things go back to normal numbers without any problem. In general no warning messages occur. On some computers, you may need to add an option (e.g. -mieee on alpha). The reasons for doing so are described in the libsvm document.
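A tiny illustration of why this is safe under IEEE arithmetic (a sketch, not libsvm's code):

#include <algorithm>
#include <cstdio>

int main()
{
    double zero = 0.0;
    double d = 1.0 / zero;              // +inf under IEEE 754; no trap, no crash
    double bounded = std::min(d, 10.0); // a later operation bounds it back
    std::printf("%f\n", bounded);       // prints 10.000000
    return 0;
}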
You have a pre-specified true positive rate in mind, and you then search for parameters which achieve a similar cross-validation accuracy.
For Microsoft Windows, first press the "print screen" key on the keyboard. Open "Microsoft Paint" (included in Windows) and press "Ctrl-V." Then you can crop the part of the picture you want. For X Windows, you can use the program "xv" to grab the picture of the svm-toy window.
The program svm-toy assumes both attributes (i.e. x-axis and y-axis values) are in (0,1). Hence you want to scale your data to be between a small positive number and a number less than, but very close to, 1.
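For example, the svm-scale options -l and -u set the lower and upper bounds of the scaled range (the bounds shown here are illustrative):

svm-scale -l 0.0001 -u 0.9999 your_data > scaled_data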
Taking windows/svm-toy.cpp as an example, you need to modify it; the difference from the original file is as follows (for five classes of data):
30,32c30
< RGB(200,0,200),
< RGB(0,160,0),
< RGB(160,0,0)
---
> RGB(200,0,200)
39c37
< HBRUSH brush1, brush2, brush3, brush4, brush5;
---
> HBRUSH brush1, brush2, brush3;
113,114d110
< brush4 = CreateSolidBrush(colors[7]);
< brush5 = CreateSolidBrush(colors[8]);
155,157c151
< else if(v==3) return brush3;
< else if(v==4) return brush4;
< else return brush5;
---
> else return brush3;
325d318
< int colornum = 5;
327c320
< svm_node *x_space = new svm_node[colornum * prob.l];
---
> svm_node *x_space = new svm_node[3 * prob.l];
333,338c326,331
< x_space[colornum * i].index = 1;
< x_space[colornum * i].value = q->x;
< x_space[colornum * i + 1].index = 2;
< x_space[colornum * i + 1].value = q->y;
< x_space[colornum * i + 2].index = -1;
< prob.x[i] = &x_space[colornum * i];
---
> x_space[3 * i].index = 1;
> x_space[3 * i].value = q->x;
> x_space[3 * i + 1].index = 2;
> x_space[3 * i + 1].value = q->y;
> x_space[3 * i + 2].index = -1;
> prob.x[i] = &x_space[3 * i];
397c390
< if(current_value > 5) current_value = 1;
---
> if(current_value > 3) current_value = 1;
They are the same thing. We just rewrote the C++ code in Java.
This depends on the VM you use. We have seen good VMs which make the Java version quite competitive with the C++ code (though still slower).
You should try to increase the maximum Java heap size. For example,
java -Xmx256m svm_train ...

sets the maximum heap size to 256MB.
It seems the DLL file is version dependent. So far we haven't found a good solution. Please email us if you have any good suggestions.
To modify the interface, follow the instructions given in http://www.swig.org/Doc1.1/HTML/Python.html#n2
If you just want to build a DLL for a different Python version, you only need Visual C++, not SWIG:
Yes, an example is here