[Figure: PenalizedSVM1.png]
Given a training dataset (x_1, y_1), ..., (x_n, y_n), where x_i is a d-tuple of the d input parameters of example i, and y_i ∈ {−1, 1}, where y_i = 1 means example i belongs to the dichotomy and y_i = −1 means the opposite. The SVM divides the space by a linear boundary:

    f(x) = Σ_{j=1}^{d} w_j h_j + b

where w = (w_1, ..., w_d) are the coefficients of the hyperplane and b denotes its intercept. The output for an example x_test from the test set is y_test = sign[f(x_test)]; so the example belongs to the dichotomy if f(x_test) > 0.
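The decision rule above can be sketched in a few lines. This is a minimal illustration, not a trained model: the weight vector w and intercept b below are made-up values chosen only to show how f(x) and the sign rule combine.

```python
import numpy as np

# Illustrative (not trained) hyperplane parameters: w = (w_1, ..., w_d) and intercept b.
w = np.array([2.0, -1.0, 0.5])
b = -0.25

def f(x):
    """Linear decision function f(x) = sum_j w_j * x_j + b."""
    return np.dot(w, x) + b

def predict(x_test):
    """y_test = sign[f(x_test)]: +1 if the example belongs to the dichotomy, -1 otherwise."""
    return 1 if f(x_test) > 0 else -1

x_test = np.array([1.0, 0.5, 2.0])
decision_value = f(x_test)       # 2*1.0 - 1*0.5 + 0.5*2.0 - 0.25 = 2.25
label = predict(x_test)          # 2.25 > 0, so the label is +1
```

Since f(x_test) = 2.25 > 0, the test example is assigned to the dichotomy (y_test = +1).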