CGNR: Conjugate Gradient Normal Residual
References in periodicals archive
At the k-th iteration, the parameters α_k and β_k of the CGNR algorithm, as well as the solution vector, the descent direction, and the residual, are updated:
Since the CGNR exhibits a monotonic decrease of the error, its stopping condition is set on RMSE_k reaching a prescribed threshold.
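The two snippets above describe one CGNR iteration (updating α_k, β_k, the solution, the descent direction, and the residual) and its RMSE-based stopping rule. A minimal sketch of that scheme follows; the function name `cgnr`, the default tolerance, and the use of a relative RMSE are illustrative assumptions, not taken from the cited papers:

```python
import numpy as np

def cgnr(A, b, tol=0.05, max_iter=1000):
    """Sketch of CGNR: conjugate gradient applied to the normal
    equations A^T A x = A^T b, yielding a least-squares solution of A x = b.
    tol is a relative-RMSE threshold (an assumed stopping criterion)."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x            # residual of the original system
    z = A.T @ r              # residual of the normal equations
    p = z.copy()             # descent direction
    z_norm2 = z @ z
    b_rms = np.sqrt(np.mean(b ** 2))
    for k in range(max_iter):
        w = A @ p                     # one matrix-vector product per iteration
        alpha = z_norm2 / (w @ w)     # step length alpha_k
        x += alpha * p                # update the solution vector
        r -= alpha * w                # update the residual
        rmse = np.sqrt(np.mean(r ** 2)) / b_rms
        if rmse <= tol:               # stop once the relative RMSE is reached
            break
        z = A.T @ r
        z_new = z @ z
        beta = z_new / z_norm2        # beta_k
        p = z + beta * p              # update the descent direction
        z_norm2 = z_new
    return x
```

Because CGNR minimizes the residual of the normal equations, the relative RMSE decreases monotonically, which is what makes the simple threshold test above a safe stopping condition.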
Thus, the time cost of the problem is mainly due to the calculation of the elements of the Z-matrix and to the Matrix-Vector Products (MVPs) of each CGNR iteration.
Since the condition number of the Fourier-type matrix (with entries indexed by k = -N, ..., N and j = 1, ...) is uniformly bounded by Corollary 3.3, only finitely many iterations of the CGNR method are needed.
We use the CGNR to iteratively solve the linear system of equations posed by the Method of Moments (MoM).
We use the CGNR [30] to solve Equation (7), which provides a least-squares solution.
The computation associated with the MVP of each CGNR iteration has a time cost of O(MN).
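The O(MN) cost quoted above is simply that of a dense matrix-vector product: each of the M output entries requires N multiply-adds. A minimal illustration, with a hypothetical `mvp` helper standing in for the product with the Z-matrix:

```python
def mvp(Z, v):
    """Dense matrix-vector product Z @ v for an M x N matrix Z.
    The nested loops perform M * N multiply-adds, hence O(MN)
    per CGNR iteration when Z is stored densely."""
    M = len(Z)
    N = len(v)
    out = [0.0] * M
    for i in range(M):
        s = 0.0
        for j in range(N):
            s += Z[i][j] * v[j]
        out[i] = s
    return out
```

This is why fast-MVP schemes (or avoiding the dense Z-matrix altogether) dominate the literature: the per-iteration cost, not the iteration count, is usually the bottleneck.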
In the experiments shown here, the CGNR iteratively solves the linear system posed by the SRM until the Root Mean Square Error (RMSE) between the measured field and the field radiated by the equivalent currents reaches 5%.