Abstract: One-class SVM partitions the feature space into two parts by a sphere: normal data objects lie inside or on the sphere, while outliers lie outside it. Since no labels distinguishing normal data from outliers are available, training a one-class SVM is an unsupervised learning task. In previous research, the model parameters of one-class SVM have been chosen a priori and by hand, a practice that lacks a principled underlying mechanism. We address this problem by iteratively optimizing the primal objective function with genetic algorithms, thereby automating model selection for one-class SVM. The model selection procedure embodies the principle of structural risk minimization for one-class SVM. The algorithms and their performance are validated by experiments.
Qun Chang , Xiaolong Wang , Yimeng Lin and Daniel S. Yeung , 2007. Automatic Model Selection for One-Class SVM. International Journal of Soft Computing, 2: 307-312.
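To make the idea concrete, the following is a minimal sketch of genetic-algorithm-driven hyperparameter search for a one-class SVM. It is not the authors' algorithm: it uses scikit-learn's `OneClassSVM` on toy Gaussian data, evolves only the RBF kernel width `gamma`, and scores candidates by the mean training slack as a crude stand-in for the primal objective (all of these choices are assumptions for illustration).

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Toy "normal" data: a single Gaussian cluster (an assumption; the
# paper's experimental datasets differ).
X = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

def fitness(log_gamma, nu=0.1):
    """Crude proxy for the one-class SVM primal objective: the mean
    slack of training points falling outside the learned boundary.
    The paper optimizes the actual primal objective instead."""
    model = OneClassSVM(kernel="rbf", gamma=10.0 ** log_gamma, nu=nu).fit(X)
    slack = np.maximum(0.0, -model.decision_function(X))  # xi_i >= 0
    return float(slack.mean())

# A minimal genetic algorithm over log10(gamma):
# evaluate -> select the fittest half -> mutate to refill the population.
pop = rng.uniform(-3.0, 2.0, size=8)
for generation in range(10):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[:4]]                # selection
    children = parents + rng.normal(scale=0.3, size=4)   # mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmin([fitness(g) for g in pop])]
print(f"selected gamma = {10.0 ** best:.4g}")
```

A real implementation would also evolve `nu`, use crossover in addition to mutation, and score candidates on the exact structural-risk objective rather than this slack proxy.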