Section 3 presents the chaos-control and learning algorithm for the FCNN.
The FCNN is based on the MLP model, with chaotic neurons derived from the logistic map, as shown in Figure 2.
The chaotic characteristic output h_i^1(k), the previous-layer output h_i^2(k), the hidden-layer output h_i(k), and the model output y(k) describe the dynamics of the FCNN model and can be written in vector form as follows:
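As a rough illustration of the idea above, the sketch below implements a hidden layer whose neurons each carry an internal logistic-map state that perturbs the usual MLP activation. The class and variable names, and the exact way the chaotic state is coupled into the activation, are assumptions for illustration, not the paper's equations.

```python
import numpy as np

def logistic_map(x, mu=4.0):
    # Logistic map x(k+1) = mu * x(k) * (1 - x(k)); mu = 4 is the fully chaotic regime.
    return mu * x * (1.0 - x)

class ChaoticHiddenLayer:
    """Illustrative chaotic hidden layer (hypothetical coupling, not the
    paper's exact model): each neuron keeps a logistic-map state z_i that
    perturbs its pre-activation at every step k."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.b = np.zeros(n_hidden)
        # chaotic states, initialized in (0, 1) so the logistic map stays bounded
        self.z = rng.uniform(0.1, 0.9, size=n_hidden)

    def step(self, x):
        # advance the chaotic states, then blend them into the MLP activation
        self.z = logistic_map(self.z)
        a = self.W @ x + self.b + (self.z - 0.5)  # chaotic perturbation term
        return np.tanh(a)

layer = ChaoticHiddenLayer(n_in=3, n_hidden=4)
h = layer.step(np.array([0.2, -0.1, 0.5]))
print(h.shape)  # (4,)
```

Because mu = 4 keeps the states chaotic rather than convergent, repeated calls to `step` with the same input produce different activations, which is the property chaos-control schemes then exploit during learning.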
In all the experimental results, SVM performs better than kNN. In particular, when M = 10 and Dsize = 90, our approach achieves its best classification performance of 99.75% using "SVM+ours".
In the above classification experiment, CCJU-TC performs best, followed by kNN and SVM; NB gives the worst classification performance (Figure 5).
They obtained a better performance when a PCNN rather than an FCNN was used.
If all inputs are coupled, an FCNN is a good choice.
We compare the performance of our method with that of a neural-network method (a fully connected NN, FCNN).
By averaging across four classifiers, SLLE-SC^2 obtains the top accuracy: 100% (kNN classifier), 94.8% (Naive Bayes classifier), and 97.9% (SVM classifier) on the leukemia, lung, and prostate datasets, respectively.
In this study, we include results from 15 representative single classifiers from diverse approaches: decision trees (C4.5), instance-based learners (kNN: k nearest neighbors), kernel-based methods (SVM: Support Vector Machines), neural networks (SLP, MLP, and RBF-DDA), rule-induction learners (OneR, JRip), and logistic regression, among others.
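A comparison of this kind can be sketched with scikit-learn stand-ins. Note the substitutions: scikit-learn ships CART rather than C4.5, and has no direct RBF-DDA, OneR, or JRip equivalents; the dataset, hyperparameters, and scoring below are illustrative assumptions, not the study's setup.

```python
# Hedged sketch: cross-validated comparison of several single classifiers.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
classifiers = {
    "CART (C4.5 stand-in)": DecisionTreeClassifier(random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    # Scaling inside the pipeline avoids leaking test-fold statistics.
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Keeping the scaler inside the pipeline, rather than fitting it once on all the data, is what makes the cross-validated scores an honest basis for ranking the classifiers.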
In AIDP versus ALL, four classifiers obtained a balanced accuracy above 0.80: kNN, C4.5, MLP, and SVMLap.
The top-ranked classifiers were MLP (2.17) for AIDP versus ALL, SVMPoly (2.37) for AMAN versus ALL, kNN (1.40) for AMSAN versus ALL, and Naive Bayes (1.20) for MF versus ALL.
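The balanced accuracy used in these one-versus-all comparisons is the mean of the per-class recalls, which is why it is preferred over plain accuracy on imbalanced classes. A minimal sketch (the function name and toy labels are illustrative):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls: each class contributes equally,
    # regardless of how many samples it has.
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Toy example: class 0 fully recalled (4/4), class 1 half recalled (1/2).
y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 1, 0])
print(balanced_accuracy(y_true, y_pred))  # 0.75, though plain accuracy is ~0.83
```

On this toy data plain accuracy (5/6 ≈ 0.83) overstates performance on the minority class, while the balanced score of 0.75 reflects the missed positive.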