CMACS: Copperbelt Mining, Agriculture and Commercial Show (Zambia)
CMACS: Centre for Marine and Coastal Studies (UK)
CMACS: Central Monitor and Control System (NASA)
CMACS: Certified Member of the Australian Computer Society
References in periodicals archive:
Su, "Fault diagnosis of steam turbine-generator sets using CMAC neural network approach and portable diagnosis apparatus implementation," Lecture Notes in Computer Science, vol.
Chao, "CMAC neural network application on lead-acid batteries residual capacity estimation," Lecture Notes in Computer Science, vol.
Yang, "Melancholia diagnosis based on GDS evaluation and meridian energy measurement using CMAC neural network approach," WSEAS Transactions on Information Science and Applications, vol.
Table 2 shows the sample data used to train the CMAC neural network.
An output value is obtained through quantization, concatenation, excited-address coding, and summation of the excited-address weightings in the CMAC neural network.
Quantization. The input data for the CMAC neural network developed in the present study all fall within a given range, that is, [X_min, X_max].
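As a concrete illustration of this step, a uniform quantizer over [X_min, X_max] might look like the following sketch; the function name and the choice of a uniform grid are assumptions, not details taken from the paper.

```python
def quantize(x, x_min, x_max, levels):
    """Map a continuous input in [x_min, x_max] to an integer
    quantization level in {0, ..., levels - 1} (uniform grid assumed)."""
    x = min(max(x, x_min), x_max)            # clamp into the valid range
    step = (x_max - x_min) / levels          # width of one quantization cell
    return min(int((x - x_min) / step), levels - 1)
```

With 64 levels over [0, 1], for example, an input of 0.5 maps to level 32, and out-of-range inputs are clamped to the boundary levels.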
Excited Address Coding and CMAC Output Calculation.
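A minimal sketch of these two steps for a one-dimensional CMAC, assuming the standard overlapping-tiling address scheme; the layer offsets and the names `excited_addresses` and `cmac_output` are illustrative, not the paper's exact coding.

```python
def excited_addresses(q, n_layers, layer_size):
    """One excited address per layer: each layer is a shifted tiling of
    the quantized axis, so nearby inputs share most of their addresses."""
    return [layer * layer_size + (q + layer) // n_layers
            for layer in range(n_layers)]

def cmac_output(weights, excited):
    """The CMAC output is the sum of the weightings stored at the
    excited memory addresses."""
    return sum(weights[a] for a in excited)
```

The address overlap is what gives the CMAC its local generalization: two nearby quantized inputs excite mostly the same cells, so training at one input adjusts the output at its neighbors as well.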
In the CMAC neural network developed in the present study, the weightings stored in the memory lattice are updated using the method of steepest descent [15, 16].
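The steepest-descent update can be sketched as an LMS-style rule in which the output error is shared evenly over the excited addresses; the learning rate `lr` and the equal error sharing reflect the usual CMAC formulation and are assumptions here, since the paper's exact equation is elided in this excerpt.

```python
def cmac_train_step(weights, excited, target, lr=0.5):
    """Steepest-descent update: move each excited weighting against the
    gradient of the squared output error, sharing the correction
    equally among the excited addresses."""
    error = target - sum(weights[a] for a in excited)
    for a in excited:
        weights[a] += lr * error / len(excited)
    return error
```

With `lr=1.0`, a single step drives the output exactly to the target for that input; smaller rates trade convergence speed for less interference between overlapping inputs.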
The memory consumption of each layer in the CMAC is related to the number of bits per group (m).
Thus, the total number of memory addresses in the present CMAC is equal to 8 x 6 x 32 = 1536.
Convergence of CMAC. The convergence properties of CMAC neural networks have been extensively examined in the literature [17].
Determine CMAC parameter settings (i.e., quantization step size: 64, memory layer: 48 bits, 8 groups, 6 bits per group).
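The stated settings can be checked arithmetically as below; the variable names are mine, and the role of the factor 32 in the address count is not explained in this excerpt, so it is simply carried through as given in the text.

```python
GROUPS = 8               # 8 groups
BITS_PER_GROUP = 6       # 6 bits per group (m = 6)
QUANT_LEVELS = 64        # quantization step size: 64

# 8 groups x 6 bits per group gives the stated 48-bit memory layer.
MEMORY_LAYER_BITS = GROUPS * BITS_PER_GROUP

# Total memory addresses as stated in the text: 8 x 6 x 32 = 1536.
TOTAL_ADDRESSES = GROUPS * BITS_PER_GROUP * 32

print(MEMORY_LAYER_BITS, TOTAL_ADDRESSES)  # 48 1536
```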