The network architecture used for the inverse model problem is the multilayered feedforward neural network (MFNN) [11, 12].
The backpropagation learning algorithm is today the most widely used training process for MFNNs with differentiable activation functions.
In traditional MFNN applications, a network topology is chosen for the requested mapping before training begins. The two most important design factors of an MFNN, which will be investigated in the following sections, are: 1) the network topology, which includes the number of hidden layers and the number of neurons in each hidden layer; and 2) the learning rate parameter, denoted μ in Eqs. 3 and 4.
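The interplay of these two design factors can be illustrated with a minimal backpropagation sketch. The topology (one hidden layer of four neurons), the sigmoid activation, the XOR target mapping, and the value of the learning rate μ below are illustrative assumptions, not values from this paper; Eqs. 3 and 4 themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy mapping: XOR, learned by a 2-4-1 MFNN (one hidden layer).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases define the chosen topology.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
mu = 0.5  # learning rate parameter (illustrative choice)

losses = []
for _ in range(5000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: chain rule through the sigmoid derivatives.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates, each scaled by mu.
    W2 -= mu * h.T @ d_out
    b2 -= mu * d_out.sum(axis=0)
    W1 -= mu * X.T @ d_h
    b1 -= mu * d_h.sum(axis=0)

# Training error should decrease from its initial value.
print(losses[0], "->", losses[-1])
```

Too small a μ slows convergence, while too large a μ can make the updates oscillate, which is why the learning rate is treated as a design factor alongside the topology.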