They implement the non-linear capability of the MLPNN [48, 49] together with some form of memory.
SVM and PNN outperform the other techniques considered, such as MLPNN, RNN, and RBFNN.
From the evaluation results given in Table 3, it is clear that MLPNN with resilient propagation is the most efficient training algorithm, both in terms of accuracy and of the execution time required, across all the different settings such as A&E, D&E, and A+D&E.
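Resilient propagation (Rprop) adapts an individual step size for each weight based only on the sign of successive gradients, which is what makes it fast in practice. The following is a minimal sketch of the Rprop- update rule; the function name, hyperparameter defaults, and NumPy formulation are illustrative and not taken from the evaluated implementation:

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One Rprop- step: grow the per-weight step size while the gradient
    keeps its sign, shrink it when the sign flips, then move w by the
    sign of the gradient times the step size."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Rprop- convention: suppress the update right after a sign flip
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step

# Illustrative use: minimize f(w) = w^2, whose gradient is 2w.
w = np.array([3.0])
prev_grad = np.zeros(1)
step = np.full(1, 0.1)
for _ in range(50):
    g = 2.0 * w
    w, prev_grad, step = rprop_update(w, g, prev_grad, step)
```

Because only gradient signs are used, Rprop is insensitive to the magnitude of the error surface, which is one common explanation for its robustness across the different input settings.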
This quantity is the kth output of the MLPNN corresponding to the given input vector.
Two sets of MLPNN networks, one for the circular geometry and the other for the rectangular geometry, have been implemented in MATLAB code.
At the same time, an improvement in the RMSE, and consequently in the performance of the MLPNN model, can also be observed.
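The RMSE referred to here is the usual root-mean-square error between the target values and the model predictions. A minimal sketch of its computation (the function name is illustrative):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error: sqrt of the mean squared residual."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

A lower RMSE on held-out data is the criterion under which the MLPNN model's performance is said to improve.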
The proposed two-hidden-layer MLPNN provided the best performance, reaching its minimum training error of 2. after nearly 400 epochs.
Consequently, a "Neural 3-D Smith Chart" has been formed by using ANNs with simple MLPNN structures as nonlinear learning machines mapping the input space to the output space.
For a given input x, the output of the three-layer MLPNN can be computed by:
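The original equation is not reproduced here; the following is a minimal sketch of a standard three-layer forward pass, assuming a sigmoid hidden layer and a linear output layer (both assumptions, since the activation functions are not stated in this excerpt):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Three-layer MLPNN: input -> sigmoid hidden layer -> linear output.
    W1, b1 are the input-to-hidden weights and biases; W2, b2 the
    hidden-to-output ones. Returns the vector of outputs y_k."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # hidden-layer activations
    return W2 @ h + b2                        # kth component is output y_k

# Illustrative use with 3 inputs, 4 hidden units, 2 outputs.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)); b1 = np.zeros(4)
W2 = rng.normal(size=(2, 4)); b2 = np.zeros(2)
y = mlp_forward(np.ones(3), W1, b1, W2, b2)
```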
Incorporating three layers, namely the input, intermediate (hidden), and output layers, the MLPNN designates a distinct role for each.
The MLPNN was also used for predicting the clustered datasets because of its ability to accommodate multiple intermediate layers.
Table 1 (A-F): Weights of the parameters in the MLPNN training.