Support Vector Machines Without Tears – Part 3 [Kernel Trick]

The previous two posts dealt with hard- and soft-margin SVMs. In both cases our model used a linear (hyperplane) decision boundary. The only difference between the two is that the soft-margin classifier does not separate the two classes perfectly, because the data is not linearly separable. We still used a hyperplane, but …
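To give a flavor of where Part 3 is headed, here is a minimal sketch of the kernel-trick idea in plain Python. The data and the feature map are illustrative, not from the post: points at x = -2 and x = 2 (class +1) versus x = 0 (class -1) cannot be split by any threshold on the line, but mapping x to (x, x²) makes them separable, and a kernel computes the inner product in that feature space without ever constructing it.

```python
def feature_map(x):
    """Explicit feature map phi(x) = (x, x^2)."""
    return (x, x * x)

def polynomial_kernel(a, b):
    """Same inner product, computed without the explicit map:
    K(a, b) = a*b + a^2 * b^2 = <phi(a), phi(b)>."""
    return a * b + (a * a) * (b * b)

# The kernel agrees with the inner product in the mapped space:
for a, b in [(-2.0, 0.0), (2.0, 1.0), (0.5, -1.5)]:
    pa, pb = feature_map(a), feature_map(b)
    explicit = pa[0] * pb[0] + pa[1] * pb[1]
    assert abs(explicit - polynomial_kernel(a, b)) < 1e-9

# In the (x, x^2) space the hyperplane x^2 = 1 separates the classes,
# even though no single threshold works on the original line:
points = [(-2.0, +1), (2.0, +1), (0.0, -1)]
for x, label in points:
    _, x_sq = feature_map(x)
    predicted = +1 if x_sq > 1.0 else -1
    assert predicted == label
```

The point of the trick is the middle assertion: the kernel gives the same number as the explicit dot product, so an SVM that only ever uses inner products never needs to build the mapped vectors at all.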

Support Vector Machines Without Tears – Part 2 [Soft Margin]

Today I will continue with the topic of SVMs and extend the discussion to classification problems where the data is not linearly separable. In the previous post I described the hard-margin classifier, derived its mathematical formulation, and implemented it in a spreadsheet. Hard Margin Classifier Recap We decided to use a …
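As a taste of what "soft margin" means in practice, here is a minimal one-dimensional sketch, trained by sub-gradient descent on the hinge loss. The dataset, the penalty C, and the learning rate are all illustrative, not from the post; the objective is the standard soft-margin one, 0.5·w² + C·Σ max(0, 1 − y·(w·x + b)), which tolerates a margin violation by the outlier instead of insisting on a perfect split.

```python
# One negative outlier at x = 0.5 makes the data non-separable
# by any margin-respecting threshold:
data = [(-2.0, -1), (-1.0, -1), (0.5, -1),
        (1.0, +1), (2.0, +1), (3.0, +1)]
C, lr, epochs = 1.0, 0.01, 2000

def objective(w, b):
    """Soft-margin objective: regularizer plus hinge losses."""
    return 0.5 * w * w + C * sum(max(0.0, 1 - y * (w * x + b))
                                 for x, y in data)

w, b = 0.0, 0.0
for _ in range(epochs):
    gw, gb = w, 0.0                    # gradient of the 0.5*w^2 term
    for x, y in data:
        if y * (w * x + b) < 1:        # point violates the margin
            gw -= C * y * x            # hinge-loss sub-gradient
            gb -= C * y
    w -= lr * gw
    b -= lr * gb
```

The learned boundary settles near the bulk of the data and simply pays the hinge penalty for the outlier, which is exactly the trade-off the C parameter controls: a huge C approaches the hard-margin behavior, a small C tolerates more violations.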

Support Vector Machines Without Tears – Part 1 [Hard Margin]

I have been on a machine learning MOOCs binge in the last year. I must say some are really amazing. The one weakness so far is the treatment of support vector machines (SVMs). It’s a shame, really, since other popular classification algorithms are covered well. I should mention that there are two exceptions, Andrew Ng’s Machine …