Specifically, we build K = 5 classifiers: the stand-alone SSSOM, the MQE-based SSSOM, the barycenter-based SSSOM, the minimum-neuron-based SSSOM, and the maximum-neuron-based SSSOM. We show how, for different classes, none of these is the best and all would mutually benefit from each other.
$\mathrm{MQE} = \frac{1}{N} \sum_{n=1}^{N} \left\| \bar{X}_n - \bar{w}_i \right\|$, (2)
Essentially, the lower the MQE of the BMU, the more similar the scenario feature vector is to its weight vector and, thus, the better that knowledge has been learnt by the SSSOM.
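The MQE of Eq. (2) can be computed directly. A minimal sketch, assuming the inputs mapped to a neuron are rows of a NumPy array and the norm is Euclidean:

```python
import numpy as np

def mqe(X, w):
    """Mean quantization error of Eq. (2): the average Euclidean distance
    between the N input vectors mapped to a neuron and its weight vector w."""
    X = np.atleast_2d(X)
    return float(np.linalg.norm(X - w, axis=1).mean())

# Toy check: two unit vectors against a zero weight vector.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
w = np.zeros(2)
print(mqe(X, w))  # both distances are 1.0, so the MQE is 1.0
```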
The classification of a new input into a class g with the MQE-based SSSOM proceeds as follows: its $\mathrm{MQE}_g$ is calculated as in (2), and the input is then assigned to the class with the largest PDF value for the calculated MQE.
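The decision rule above can be sketched as follows. The excerpt does not specify the PDF family, so class-conditional Gaussians fitted to each class's training MQEs are an assumption here, and all helper names are hypothetical:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def classify_by_mqe(mqe_per_class, params):
    """Assign the input to the class g whose fitted PDF takes the largest
    value at MQE_g. params[g] holds the (mu, sigma) assumed fitted on
    class g's training MQEs (hypothetical; the PDF family is an assumption)."""
    scores = [gaussian_pdf(m, mu, s) for m, (mu, s) in zip(mqe_per_class, params)]
    return max(range(len(scores)), key=scores.__getitem__)

# Example: the first class's PDF is denser at its observed MQE, so it wins.
params = [(0.1, 0.05), (0.5, 0.05)]
print(classify_by_mqe([0.12, 0.45], params))  # -> 0
```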
In Tables 1-5, the performances of the MQE-based, barycenter-based, minimum-neuron-based, maximum-neuron-based, and stand-alone SSSOMs are reported for each class.
This procedure was based on the hypothesis that the model with the smallest residual MQE after the first training iteration is better positioned than the other configurations to reach a smaller MQE even with a higher number of iterations.
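The selection step this hypothesis justifies can be sketched in a few lines; `residual_mqe_after_one_iter` is a hypothetical stand-in for one training pass followed by an MQE evaluation:

```python
def select_configuration(configs, residual_mqe_after_one_iter):
    """Keep only the configuration with the smallest residual MQE after the
    first training iteration, on the hypothesis that it will also reach the
    smallest MQE after further iterations."""
    return min(configs, key=residual_mqe_after_one_iter)

# Example with a stubbed-out evaluation:
observed = {"A": 0.42, "B": 0.17, "C": 0.33}
print(select_configuration(list(observed), observed.get))  # -> "B"
```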
Retention Time showed a significant main effect, F(4, 184) = 7.432, MQE = 28.289, p < 0.05, $\eta^2_p = 0.14$ (see Figure 3).
Although model (1,0,0)(1,1,1) had the lowest values of AIC, SBC, and HQC, it also presented the highest values in all of the validation tests using the APE, MAPE, MdAPE, MQE, RMSE, MAE, and U-Theil statistics (Table 2).
From the MQE of each repetition, the coefficient of variation was estimated, which made it possible to assess the stability of the representation of the data by the SOM.
As observed in Figure 2, the more neurons, the higher the coefficient of variation of the MQE; this indicates that a map with four units per dimension (16 neurons) would be more appropriate for the data analyzed.
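The stability measure used above reduces to the coefficient of variation of the MQE values collected over repeated trainings. A minimal sketch, assuming the sample standard deviation:

```python
import numpy as np

def mqe_cv(mqe_values):
    """Coefficient of variation (std / mean) of the MQE over repeated SOM
    trainings; a lower value suggests a more stable data representation."""
    v = np.asarray(mqe_values, dtype=float)
    return float(v.std(ddof=1) / v.mean())

# Identical MQEs across repetitions give a CV of 0 (a perfectly stable map).
print(mqe_cv([0.3, 0.3, 0.3]))  # -> 0.0
```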
2) When the two models are compared to decide whether or not to group the observations into blocks, the block methods are chosen because they provide better MQE and AMQE statistics.
$\mathrm{MQE} = \frac{1}{n} \sum_i e_i^2; \quad \mathrm{AMQE} = \sqrt{\frac{1}{n} \sum_i \left( \frac{e_i}{\sigma_i} \right)^2}$,
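These two statistics translate directly into code. A minimal sketch, assuming the residuals $e_i$ and their deviations $\sigma_i$ are given as arrays:

```python
import numpy as np

def mqe(e):
    """MQE: the mean of the squared residuals e_i."""
    e = np.asarray(e, dtype=float)
    return float(np.mean(e ** 2))

def amqe(e, sigma):
    """AMQE: the root mean square of the residuals standardized by their
    deviations sigma_i."""
    e = np.asarray(e, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return float(np.sqrt(np.mean((e / sigma) ** 2)))

# Residuals of +/-1 with unit deviations give MQE = 1.0 and AMQE = 1.0.
print(mqe([1.0, -1.0]), amqe([1.0, -1.0], [1.0, 1.0]))
```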