A Support Vector Machine (SVM) is a non-parametric supervised learning model. For non-linear classification and regression, SVMs use the kernel trick to map inputs into high-dimensional feature spaces. An SVM constructs a hyperplane, or a set of hyperplanes, in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good separation is achieved by the hyperplane with the largest distance to the nearest training data points of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called "support vectors".
Source: scikit-learn
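
The ideas above can be sketched with scikit-learn's `SVC`. This is a minimal illustration, not part of the original page: the toy data and parameters are assumptions chosen so the two classes are linearly separable, and the fitted model exposes the margin-boundary points via `support_vectors_`.

```python
# Minimal sketch of a linear SVM, assuming scikit-learn is installed.
# The toy dataset below is illustrative, not from the original page.
from sklearn import svm

# Two linearly separable classes in 2-D.
X = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0],
     [3.0, 3.0], [4.0, 4.0], [3.0, 4.0], [4.0, 3.0]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# A linear kernel suffices here; for non-linear problems the kernel trick
# (e.g. kernel="rbf" or "poly") implicitly maps inputs to a
# higher-dimensional feature space.
clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# The training points lying on the margin boundaries are the support vectors.
print(clf.support_vectors_)
print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))
```

Only the points nearest the separating hyperplane end up in `support_vectors_`; moving any other training point (without crossing the margin) would not change the decision function.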
| Task | Papers | Share |
|---|---|---|
| Classification | 74 | 11.82% |
| BIG-bench Machine Learning | 36 | 5.75% |
| EEG | 24 | 3.83% |
| Anomaly Detection | 19 | 3.04% |
| Electroencephalogram (EEG) | 19 | 3.04% |
| Image Classification | 15 | 2.40% |
| Emotion Recognition | 15 | 2.40% |
| General Classification | 14 | 2.24% |
| Sentiment Analysis | 13 | 2.08% |