KECA is characterized by its entropic components, rather than the variance-based principal components used in PCA and KPCA.
In short, KECA performs dimensionality reduction (DR) by projecting $\phi(X)$ onto a subspace $E_l$ spanned not by the eigenvectors associated with the top eigenvalues but by the entropic components that contribute most to the Rényi entropy estimator $\hat{H}(p)$.
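For reference, the entropy estimate in KECA is built from the Parzen-window kernel matrix as $\hat{V}(p) = \frac{1}{N^2}\mathbf{1}^\top K \mathbf{1}$, with quadratic Rényi entropy $\hat{H}(p) = -\log \hat{V}(p)$. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def renyi_entropy_estimate(K):
    """Parzen-window estimate of the information potential V(p) from a
    kernel matrix K: V(p) = (1/N^2) 1^T K 1.
    The quadratic Renyi entropy estimate is H2(p) = -log V(p)."""
    n = K.shape[0]
    V = K.sum() / n ** 2
    return -np.log(V), V
```

With a kernel matrix of all ones (every sample identical), $\hat{V}(p) = 1$ and the entropy estimate is zero, as expected for a point mass.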
Because KECA is sensitive to the choice of the kernel bandwidth $\sigma$, OKECA was proposed to address this weakness and improve the DR performance of KECA. By multiplying the entropic components by a rotation matrix, OKECA captures at least as much information potential as KECA while using fewer components.
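The effect of the rotation can be illustrated with a toy 2-D example: a grid search over rotation angles (a simple stand-in for OKECA's gradient-based search over orthonormal matrices, used here only for illustration) finds a rotation whose first component carries at least as much information potential as the unrotated first component.

```python
import numpy as np

def best_rotation_1d(Z):
    """Grid-search a 2-D rotation so that the FIRST rotated component
    carries maximal information potential (1^T z)^2.
    Toy illustration of OKECA's idea; names and method are assumptions."""
    best_ip, best_R = -np.inf, None
    for theta in np.linspace(0.0, np.pi, 721):
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        ip = (Z @ R)[:, 0].sum() ** 2   # information potential of component 1
        if ip > best_ip:
            best_ip, best_R = ip, R
    return best_R, best_ip

Z = np.random.RandomState(1).randn(50, 2)   # scores on two entropic components
R, ip_rotated = best_rotation_1d(Z)
ip_unrotated = Z[:, 0].sum() ** 2
```

Since the identity rotation ($\theta = 0$) is in the search grid, the rotated first component never carries less information potential than the original one, mirroring the "more (or equal) information potential with fewer components" claim.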
To alleviate the problems remaining in OKECA, this section presents how to extend KECA to a nongreedy L1-norm version.
Jenssen established a semisupervised learning (SSL) algorithm for classification using KECA. This SSL-based classifier is trained on both labeled and unlabeled data to build the kernel matrix, so that it can map the data to the kernel feature space (KFS) appropriately.
This also reflects the data transformation performed by KECA in the input space. In contrast to KPCA, which performs data transformation and dimensionality reduction by selecting eigenvalues and the corresponding eigenvectors of the kernel matrix solely according to eigenvalue magnitude, KECA selects eigenvalues according to their contribution to the entropy estimate.
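The difference between the two selection rules can be sketched as follows: each eigenpair $(\lambda_i, e_i)$ of the kernel matrix contributes $\lambda_i (\mathbf{1}^\top e_i)^2$ to the entropy estimate, and KECA ranks components by this quantity rather than by $\lambda_i$ alone. A minimal sketch with a Gaussian kernel (function name and defaults are assumptions):

```python
import numpy as np

def keca_select(X, n_components=2, sigma=1.0):
    """Rank kernel eigenpairs by their entropy contribution
    lambda_i * (1^T e_i)^2 (KECA) instead of by eigenvalue size (KPCA).
    Illustrative helper, not the authors' exact implementation."""
    # Gaussian (RBF) kernel matrix
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))
    lam, E = np.linalg.eigh(K)            # ascending eigenvalues
    lam, E = lam[::-1], E[:, ::-1]        # sort descending
    contrib = lam * (E.sum(axis=0) ** 2)  # entropy contribution per component
    idx = np.argsort(contrib)[::-1][:n_components]
    # projection of phi(X) onto the selected entropic components
    Z = E[:, idx] * np.sqrt(np.clip(lam[idx], 0.0, None))
    return Z, idx, contrib

X = np.random.RandomState(0).randn(30, 3)
Z, idx, contrib = keca_select(X, n_components=2)
```

The components chosen by `contrib` need not coincide with the top eigenvalues, which is precisely the point of KECA.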
To apply the KECA algorithm to condition monitoring and fault diagnosis of the HTGR, statistical monitoring control charts must be constructed to reflect the operating condition.
The SPE statistic, also known as the Q statistic, measures the goodness of fit of a sample to the built model; it is the error between the actual measured variables and their reconstruction by the KECA model:
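In kernel methods this reconstruction error is usually evaluated in the feature space. One common form (an assumption here, not necessarily the exact formula used by the authors) expresses SPE as the feature-space energy not captured by the retained components:

```python
import numpy as np

def spe_statistic(t_full, t_retained):
    """SPE (Q) statistic as squared reconstruction error in feature space:
    the energy of the full score vector minus the energy captured by the
    retained components. A common kernel-method form (assumption)."""
    return np.sum(np.asarray(t_full) ** 2) - np.sum(np.asarray(t_retained) ** 2)

# A sample with scores (3, 4, 1) of which (3, 4) are retained
spe = spe_statistic([3.0, 4.0, 1.0], [3.0, 4.0])
```

Here the discarded component carries all of the residual, so the SPE equals the squared score on that component.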
Hence, the traditional method of calculating confidence limits is no longer applicable to KECA. Here, the kernel density estimation (KDE) method is introduced to determine the confidence limits of the two statistics.
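A minimal sketch of a KDE-based control limit: estimate the density of the monitoring statistic under normal operation, integrate it numerically, and take the value at which the estimated CDF reaches the confidence level (grid bounds and resolution are assumptions):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_control_limit(stat_samples, alpha=0.99):
    """Estimate the alpha-level confidence limit of a monitoring statistic
    by kernel density estimation, without assuming a Gaussian distribution.
    Grid bounds/resolution are illustrative choices."""
    stat_samples = np.asarray(stat_samples)
    kde = gaussian_kde(stat_samples)
    lo = stat_samples.min() - 3.0 * stat_samples.std()
    hi = stat_samples.max() + 3.0 * stat_samples.std()
    grid = np.linspace(lo, hi, 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]                      # normalize to a proper CDF
    return grid[np.searchsorted(cdf, alpha)]

# Example: a skewed, non-Gaussian statistic under normal operation
samples = np.random.RandomState(0).chisquare(4, size=500)
limit = kde_control_limit(samples, alpha=0.99)
```

Samples of the statistic that exceed `limit` would then be flagged as potential faults on the control chart.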