The classification problem is usually formulated mathematically as a cost function to be minimized; in the case of the NSVC, this cost is the distortion function.
Finally, we obtain the expression of α_ki that will be used to construct the vicinal kernel functions for the NSVC:
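The construction can be sketched as follows. The paper's exact expression for α_ki is not reproduced in this excerpt, so this sketch uses standard Fuzzy C-Means memberships as a stand-in for the α_ki coefficients and modulates an RBF kernel by the shared cluster membership of each pair of samples; `fcm_memberships`, `vicinal_kernel`, and the parameter names are illustrative, not the paper's code.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy C-Means membership u_ki of sample x_i to cluster k
    (rows sum to 1); a stand-in for the alpha_ki coefficients."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (n, K)
    d = np.maximum(d, 1e-12)                                         # avoid /0
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def vicinal_kernel(X, centers, gamma=1.0, m=2.0):
    """Cluster-weighted RBF kernel:
    K(x_i, x_j) = (sum_k alpha_ki * alpha_kj) * exp(-gamma ||x_i - x_j||^2).
    The weight matrix U U^T and the RBF kernel are both positive
    semidefinite, so their elementwise product is a valid kernel."""
    U = fcm_memberships(X, centers, m)                    # (n, K)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return (U @ U.T) * np.exp(-gamma * sq)
```

Samples that fall in the same cluster keep a large kernel value, while pairs from different clusters are attenuated, which is the "neighboring" effect the vicinal kernel is meant to capture.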
Results of NSVC. The basic idea of the Neighboring Support Vector Classifier (NSVC) is to build new neighboring kernel functions obtained by supervised clustering in feature space.
We evaluate the accuracy of each feature extraction method with NSVC. The results obtained are shown in Tables 1 and 2.
We compare the classification results of these five algorithms together with our proposed NSVC on the SLBP (Figure 8), ULBP (Figure 9), SLBP + DWT (Figure 10), SLBP + HOG (Figure 11), and ULBP + DWT (Figure 12).
In addition to its high performance, the NSVC is a new theoretical classification method that combines two methods belonging to two different families: an unsupervised method (Fuzzy C-Means) and a supervised method (SVM).
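The two-stage combination described above can be sketched end to end: an unsupervised stage produces fuzzy cluster memberships, and a supervised SVM is then trained on the resulting precomputed kernel. This is a minimal, self-contained illustration, not the paper's implementation: the cluster centers, the FCM stand-in for the α coefficients, and the toy data are all assumptions; the SVM step uses scikit-learn's `SVC` with `kernel="precomputed"`.

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, centers, m=2.0):
    # FCM membership of each sample to each cluster (rows sum to 1);
    # stands in for the paper's alpha coefficients.
    d = np.maximum(np.linalg.norm(X[:, None] - centers[None], axis=2), 1e-12)
    return 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)

def weighted_kernel(U_a, U_b, X_a, X_b, gamma=0.5):
    # RBF kernel modulated by shared cluster membership:
    # K(a, b) = (sum_k u_ka u_kb) * exp(-gamma ||a - b||^2)
    sq = ((X_a[:, None] - X_b[None]) ** 2).sum(axis=2)
    return (U_a @ U_b.T) * np.exp(-gamma * sq)

# Toy data: two well-separated Gaussian blobs.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) - 2, rng.randn(20, 2) + 2])
y = np.array([0] * 20 + [1] * 20)

# Stage 1 (unsupervised side): crude cluster centers (class means here,
# as a stand-in for the supervised clustering step).
centers = np.vstack([X[y == 0].mean(0), X[y == 1].mean(0)])
U = fuzzy_memberships(X, centers)

# Stage 2 (supervised side): SVM trained on the precomputed kernel.
K = weighted_kernel(U, U, X, X)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))
```

Keeping the kernel precomputed cleanly separates the two families: the clustering stage can be swapped out without touching the SVM, which only ever sees a Gram matrix.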
In order to manage these descriptors and combine them in an optimized way, we propose using an advanced learning system, the NSVC, which selects the most important information through kernel weighting according to relevance.