SVM
Maximum-margin
Primal problem
The decision function is $f(x) = w^T x + b$, and the two margin hyperplanes are

$$w^T x + b = 1$$

and

$$w^T x + b = -1.$$
So the distance between the two hyperplanes is:

$$\frac{2}{\|w\|}.$$
To find the hyperplane with the maximum margin, we need to solve:

$$\min_{w,b} \ \frac{1}{2}\|w\|^2$$

subject to the following constraints:

$$y_i(w^T x_i + b) \ge 1, \quad i = 1, \dots, N.$$
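As a quick sanity check, here is a minimal sketch (my own illustration, not part of the original notes) that approximates the hard-margin primal with scikit-learn's `SVC` by using a very large `C`, then reads off $w$, $b$, and the margin width $2/\|w\|$. The toy data points are made up.

```python
# A minimal sketch (illustrative): approximate the hard-margin primal with
# scikit-learn's SVC by using a very large C, then read off w, b and the
# margin width 2 / ||w||.
import numpy as np
from sklearn.svm import SVC

# Tiny linearly separable toy data (assumed for illustration).
X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5],    # positive class
              [0.0, 0.0], [0.5, -0.5], [-0.5, 0.5]])  # negative class
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)  # huge C ~ hard margin
clf.fit(X, y)

w = clf.coef_[0]         # the weight vector w
b = clf.intercept_[0]    # the bias b
margin = 2.0 / np.linalg.norm(w)

print("w =", w, "b =", b)
print("margin width 2/||w|| =", margin)
print("constraints y_i (w^T x_i + b) >= 1:", y * (X @ w + b))
```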
Dual problem
From the Lagrangian of the primal problem:

$$L(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{N} \alpha_i \left[ y_i(w^T x_i + b) - 1 \right], \quad \alpha_i \ge 0,$$
the primal problem is equivalent to the following:

$$\min_{w,b} \ \max_{\alpha \ge 0} \ L(w, b, \alpha),$$

and the dual problem swaps the $\min$ and $\max$:

$$\max_{\alpha \ge 0} \ \min_{w,b} \ L(w, b, \alpha).$$
To solve the inner minimization, we set the gradients of $L$ w.r.t. $w$ and $b$ equal to zero:

$$\frac{\partial L}{\partial w} = 0 \;\Rightarrow\; w = \sum_{i=1}^{N} \alpha_i y_i x_i$$

and

$$\frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^{N} \alpha_i y_i = 0.$$
So $w$ is a linear combination of the training points $x_i$, and due to the sparsity of $\alpha$ (most $\alpha_i = 0$ at the optimum), we only need the support vectors.
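The relation $w = \sum_i \alpha_i y_i x_i$ can be checked numerically. Below is a small illustrative sketch (assumed toy data): scikit-learn stores $\alpha_i y_i$ for the support vectors in `dual_coef_`, so summing those against the support vectors should reproduce `coef_`.

```python
# A small check (illustrative, not from the original): after fitting a linear
# SVM, w equals sum_i alpha_i y_i x_i, and only the support vectors
# (alpha_i > 0) contribute.
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5],
              [0.0, 0.0], [0.5, -0.5], [-0.5, 0.5]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)

# sklearn stores alpha_i * y_i for the support vectors in dual_coef_.
alpha_times_y = clf.dual_coef_[0]
w_from_dual = alpha_times_y @ clf.support_vectors_

print("support vector indices :", clf.support_)
print("w from coef_           :", clf.coef_[0])
print("w from sum a_i y_i x_i :", w_from_dual)
```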
When we substitute $w$ back into the Lagrange dual function, the problem is converted to:

$$\max_{\alpha} \ \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j \, x_i^T x_j$$

subject to:

$$\alpha_i \ge 0, \quad i = 1, \dots, N,$$

and

$$\sum_{i=1}^{N} \alpha_i y_i = 0.$$
Complexity of the optimization:
For the dual problem we optimize $N$ parameters, one $\alpha_i$ per training example, so the cost grows with the number of training examples.
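To make the "one parameter per training example" point concrete, here is a rough sketch (an assumption of mine, not the usual SMO solver) that feeds the hard-margin dual to a generic `scipy.optimize.minimize` call; the $\alpha$ vector has exactly $N$ entries.

```python
# A rough sketch (not the standard SMO solver): solve the hard-margin dual
# with a generic optimizer. There is one alpha_i per training example, so
# the problem size grows with N.
import numpy as np
from scipy.optimize import minimize

X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5],
              [0.0, 0.0], [0.5, -0.5], [-0.5, 0.5]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
N = len(y)

K = X @ X.T                          # Gram matrix x_i^T x_j
Q = (y[:, None] * y[None, :]) * K

def neg_dual(alpha):
    # maximize sum(alpha) - 0.5 * alpha^T Q alpha  ->  minimize its negative
    return 0.5 * alpha @ Q @ alpha - alpha.sum()

constraints = {"type": "eq", "fun": lambda a: a @ y}   # sum_i alpha_i y_i = 0
bounds = [(0.0, None)] * N                             # alpha_i >= 0

res = minimize(neg_dual, x0=np.zeros(N), bounds=bounds,
               constraints=constraints, method="SLSQP")
alpha = res.x
w = (alpha * y) @ X
sv = alpha > 1e-6                     # support vectors
b = np.mean(y[sv] - X[sv] @ w)        # from y_i (w^T x_i + b) = 1 on SVs

print("alpha:", np.round(alpha, 3))
print("w:", w, "b:", b)
```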
Soft-margin
However, sometimes there is no hyperplane that can split the data. We need to tolerate some examples, so the constraints become:

$$y_i(w^T x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0.$$

If $\xi_i = 0$, the example is correctly classified and outside the margin. If $\xi_i > 1$, the example is misclassified. So $\sum_i \xi_i$ is an upper bound on the number of misclassified examples.
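The slack variables can be recovered from a fitted model as $\xi_i = \max(0, 1 - y_i f(x_i))$. The sketch below (assumed random toy data) does exactly that and counts the points with $\xi_i > 1$.

```python
# An illustrative sketch (assumed data): recover the slack variables of a
# fitted soft-margin SVM as xi_i = max(0, 1 - y_i * f(x_i)). Points with
# xi_i > 1 are misclassified, so sum(xi) upper-bounds the number of errors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 1.0, size=(20, 2)),
               rng.normal(-1.0, 1.0, size=(20, 2))])
y = np.array([1] * 20 + [-1] * 20)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
f = clf.decision_function(X)          # f(x) = w^T x + b
xi = np.maximum(0.0, 1.0 - y * f)     # slack variables

print("misclassified:", int(np.sum(xi > 1)))
print("sum of slacks:", xi.sum())
```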
Primal problem:

$$\min_{w,b,\xi} \ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi_i$$

subject to:

$$y_i(w^T x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, N.$$
Dual problem:

To solve the inner $\min$ problem, we set the gradients of the Lagrangian w.r.t. $w$, $b$, and $\xi$ equal to zero:

$$w = \sum_{i=1}^{N} \alpha_i y_i x_i, \qquad \sum_{i=1}^{N} \alpha_i y_i = 0, \qquad C - \alpha_i - \mu_i = 0,$$

where $\mu_i \ge 0$ are the multipliers for the constraints $\xi_i \ge 0$.
When we substitute $w$ back into the Lagrange dual function, the problem is converted to:

$$\max_{\alpha} \ \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j \, x_i^T x_j$$

subject to:

$$0 \le \alpha_i \le C, \quad i = 1, \dots, N,$$

and

$$\sum_{i=1}^{N} \alpha_i y_i = 0.$$
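A quick way to see the box constraint in practice (illustrative sketch, assumed toy data): scikit-learn exposes $\alpha_i y_i$ in `dual_coef_`, so its absolute values should never exceed $C$, and the $\alpha_i$ that sit exactly at $C$ correspond to margin violators.

```python
# A quick check (illustrative): in the soft-margin dual the only change is
# the box constraint 0 <= alpha_i <= C. sklearn exposes alpha_i * y_i in
# dual_coef_, so |dual_coef_| should never exceed C.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+1.0, 1.5, size=(30, 2)),
               rng.normal(-1.0, 1.5, size=(30, 2))])
y = np.array([1] * 30 + [-1] * 30)

C = 0.5
clf = SVC(kernel="linear", C=C).fit(X, y)
alphas = np.abs(clf.dual_coef_[0])    # = alpha_i for the support vectors

print("max alpha_i:", alphas.max(), "<= C =", C)
print("alphas at the bound C (margin violators):",
      int(np.sum(np.isclose(alphas, C))))
```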
The only difference from the hard-margin dual is the constraint on $\alpha$: $0 \le \alpha_i \le C$ instead of $\alpha_i \ge 0$. It is remarkable that such a tiny change is enough to handle non-separable data, and this elegance is part of why the authors received a major award.
Some important parameters
C
Controls how much we tolerate misclassified examples and margin violations: a large $C$ penalizes the slacks heavily (close to hard margin), while a small $C$ allows more violations in exchange for a wider margin.
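A small sketch (assumed toy data) of this trade-off: as $C$ grows, the margin shrinks and fewer violations are tolerated; as $C$ shrinks, the margin widens and more points become support vectors.

```python
# A small sketch (assumed data) of how C trades margin width against
# tolerance of violations.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(+1.0, 1.2, size=(50, 2)),
               rng.normal(-1.0, 1.2, size=(50, 2))])
y = np.array([1] * 50 + [-1] * 50)

for C in [0.01, 0.1, 1.0, 10.0, 100.0]:
    clf = SVC(kernel="linear", C=C).fit(X, y)
    margin = 2.0 / np.linalg.norm(clf.coef_[0])
    print(f"C={C:6.2f}  support vectors={len(clf.support_):3d}  "
          f"margin={margin:.3f}  train acc={clf.score(X, y):.2f}")
```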
Does the kernel hurt performance?
Choosing a kernel that does not match the structure of the data will definitely hurt the performance.
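For example (an illustration I added, not from the original), on data with a circular class boundary a linear kernel is the "wrong" kernel and scores poorly, while an RBF kernel fits the structure:

```python
# A quick illustration (assumed setup): on data with a circular decision
# boundary, a linear kernel scores poorly, while an RBF kernel fits it.
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.4, noise=0.1, random_state=0)

for kernel in ["linear", "rbf"]:
    scores = cross_val_score(SVC(kernel=kernel, C=1.0), X, y, cv=5)
    print(f"{kernel:6s} kernel: accuracy = {scores.mean():.2f}")
```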
One-vs-One or One-vs-Rest
For multi-class classification there is no big difference between the two in practice.
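A small comparison sketch (my own, using scikit-learn): `SVC` is one-vs-one internally, and wrapping it in `OneVsRestClassifier` gives one-vs-rest; on a simple multi-class dataset such as iris the accuracies come out very close.

```python
# A small comparison (illustrative): sklearn's SVC is one-vs-one by default;
# wrapping it in OneVsRestClassifier gives one-vs-rest.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

ovo = SVC(kernel="linear", C=1.0)                      # one-vs-one internally
ovr = OneVsRestClassifier(SVC(kernel="linear", C=1.0)) # one-vs-rest wrapper

print("one-vs-one :", cross_val_score(ovo, X, y, cv=5).mean())
print("one-vs-rest:", cross_val_score(ovr, X, y, cv=5).mean())
```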
Overfitting
Max margin helps avoid overfitting: even when the accuracy on the training data is very good, the results on the test data usually remain good, degrading only slightly.
Feature or Model
Features are the most important thing: the more information the features carry, the more likely the classifier is to be correct.
Feature Selection before SVM?
No, SVM's max-margin regularization will handle the irrelevant features. [Still don't know exactly why: see here and here.]
How can SVM handle a large number of features?
The SVM is an approximate implementation of a bound on the generalization error, that depends on the margin (essentially the distance from the decision boundary to the nearest pattern from each class), but is independent of the dimensionality of the feature space (which is why using the kernel trick to map the data into a very high dimensional space isn’t such a bad idea as it might seem). So in principle SVMs should be highly resistant to over-fitting, but in practice this depends on the careful choice of C and the kernel parameters. Sadly, over-fitting can also occur quite easily when tuning the hyper-parameters as well.
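A standard sketch of that careful choice (my addition, with made-up synthetic data): tune $C$ and $\gamma$ by cross-validation, and keep a separate test split so the reported score is not itself over-fit by the hyper-parameter search.

```python
# A standard sketch (not from the original text) of tuning C and gamma with
# cross-validation, plus an outer split so the reported score is not itself
# over-fit by the hyper-parameter search.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("cv score   :", search.best_score_)
print("test score :", search.score(X_te, y_te))
```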
What is the probability that random features could distinguish the two classes?
Given 10 data points, 5 positive and 5 negative, where each data point has $d$ random features, what is the probability that there exists a hyperplane $w$ that separates these data points? What is the relation between this probability and the feature dimension $d$?
Suppose the probability that a single dimension can split the data is $p_0$. Then, with $d$ dimensions, the probability that at least one dimension can split the data is $1 - (1 - p_0)^d$, which is very high for large $d$. That is why more features make it easier for the SVM to overfit. But the really informative features will typically have a larger margin, so the SVM will still tend to pick the truly informative features.
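This can be estimated by simulation. The sketch below (my own assumption about the setup: Gaussian random features, separability tested by whether a linear SVM with a huge $C$ reaches 100% training accuracy) estimates $P(\text{separable})$ as a function of $d$:

```python
# A Monte-Carlo sketch (an assumption of mine, not from the original notes):
# estimate the probability that 10 random points (5 per class) with d random
# features are linearly separable, by checking whether a linear SVM with a
# very large C reaches 100% training accuracy.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def prob_separable(d, trials=200):
    hits = 0
    for _ in range(trials):
        X = rng.normal(size=(10, d))          # 10 points, d random features
        y = np.array([1] * 5 + [-1] * 5)
        clf = SVC(kernel="linear", C=1e6).fit(X, y)
        hits += clf.score(X, y) == 1.0        # separable if training acc is 1
    return hits / trials

for d in [1, 2, 5, 10, 20]:
    print(f"d={d:3d}  P(separable) ~ {prob_separable(d):.2f}")
```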
So, some conclusions:
- Additional uninformative features let the SVM overfit; the more such features there are, the higher the probability of overfitting.
- Really informative features tend to have a much larger margin than uninformative ones, so the SVM still tends to ignore the uninformative features. But this all depends on $C$: if $C$ is large, which pushes the model to fit the training data, the SVM has to use the uninformative features (see the sketch after this list).
- Take-home message: additional features extend the space in which the SVM can overfit, but the SVM still tends not to go into that space. The tendency is controlled by $C$.
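Here is the experiment referenced above (my own sketch, not from the original notes): two informative features plus many pure-noise features; we watch how much of $\|w\|$ falls on the noise dimensions and how the test accuracy behaves as $C$ grows.

```python
# A small experiment (illustrative): two informative features plus many
# pure-noise features. With a small C the SVM puts little weight on the noise
# dimensions; with a large C it leans more on them, and the test accuracy
# usually drops.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n, d_noise = 100, 200
y = np.array([1] * (n // 2) + [-1] * (n // 2))
X_inf = y[:, None] * 1.0 + rng.normal(0, 1.0, size=(n, 2))  # 2 informative dims
X_noise = rng.normal(size=(n, d_noise))                     # uninformative dims
X = np.hstack([X_inf, X_noise])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for C in [0.01, 0.1, 1.0, 10.0, 100.0]:
    clf = SVC(kernel="linear", C=C).fit(X_tr, y_tr)
    w = clf.coef_[0]
    noise_weight = np.linalg.norm(w[2:]) / np.linalg.norm(w)
    print(f"C={C:6.2f}  train={clf.score(X_tr, y_tr):.2f}  "
          f"test={clf.score(X_te, y_te):.2f}  "
          f"fraction of ||w|| on noise dims={noise_weight:.2f}")
```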
Some remaining questions:
- What are the theoretical generalization bounds for SVM?
- What is the probability that there exists a hyperplane that can split a given dataset? Stackoverflow link