Moreover, the proposed approach is shown to outperform the pure DACG method.
We propose here to perform a number of preliminary iterations using the DACG eigensolver [16,23,24], which is based on the preconditioned nonlinear conjugate gradient minimization of the Rayleigh quotient.
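As an illustration, the following is a minimal sketch of such a preconditioned nonlinear CG minimization of the Rayleigh quotient for the smallest eigenpair, in the spirit of DACG. This is an assumed implementation, not the one from [16,23,24]: the function name `dacg_like`, the Fletcher-Reeves update, and the exact line search via a 2-by-2 Rayleigh-Ritz projection are all illustrative choices.

```python
import numpy as np

def dacg_like(A, apply_minv, x0, tol=1e-8, maxit=1000):
    """Sketch of a DACG-like solver for the smallest eigenpair of SPD A.

    apply_minv : callable applying the preconditioner inverse to a vector.
    Returns (theta, x) with theta the Rayleigh quotient at the unit vector x.
    """
    x = x0 / np.linalg.norm(x0)
    p = None          # current search direction
    gz_prev = None    # previous r^T M^{-1} r, for the Fletcher-Reeves beta
    theta = x @ (A @ x)
    for _ in range(maxit):
        Ax = A @ x
        theta = x @ Ax                 # Rayleigh quotient (||x|| = 1)
        r = Ax - theta * x             # eigenresidual = half the gradient
        if np.linalg.norm(r) <= tol * theta:
            break                      # relative stopping test, as in the text
        z = apply_minv(r)              # preconditioned residual
        gz = r @ z
        if p is None:
            p = -z
        else:
            p = -z + (gz / gz_prev) * p   # Fletcher-Reeves-type direction
        gz_prev = gz
        # Exact line search: Rayleigh-Ritz on span{x, p}, keep the
        # Ritz vector of the smallest Ritz value (eigh sorts ascending).
        Q, _ = np.linalg.qr(np.column_stack([x, p]))
        _, V = np.linalg.eigh(Q.T @ A @ Q)
        x = Q @ V[:, 0]
    return theta, x
```

The Rayleigh-Ritz line search guarantees a monotone decrease of the Rayleigh quotient at each step, which is the property the nonlinear CG scheme relies on.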
Compute $P_0$, an RFSAI preconditioner for $A$.
for $j := 1$ to $n_{\mathrm{eig}}$
  (1) Choose $x_0$ such that $U^T x_0 = 0$;
  (2) Compute $u_0$, an approximation to the $j$th eigenvector, by the DACG procedure with initial vector $x_0$, preconditioner $P_0$, and tolerance $\tau_{\mathrm{DACG}}$;
  (3) $k := 0$, $\theta_k := u_k^T A u_k$;
  (4) while $\|A u_k - \theta_k u_k\| > \tau \theta_k$ and $k < \mathrm{IMAX}$ do
      (1) $Q := [U \;\; u_k]$.
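Step (1) above deflates the converged eigenvectors from the new starting vector. A minimal sketch of that projection (assuming NumPy, and assuming $U$ is stored with orthonormal columns; the helper name `deflate` is hypothetical):

```python
import numpy as np

def deflate(x0, U):
    """Enforce U^T x0 = 0 by projecting x0 onto the orthogonal
    complement of the columns of U (assumed orthonormal), then
    normalizing, as required before each new DACG run."""
    if U.shape[1] > 0:
        x0 = x0 - U @ (U.T @ x0)
    return x0 / np.linalg.norm(x0)
```

In exact arithmetic a single projection suffices; in practice a reorthogonalization pass may be applied when the projected vector is small.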
In Table 5, we report the number of matrix-vector products and the CPU times for Newton-DACG as compared with "pure" DACG, that is, DACG run with a final tolerance of $10^{-8}$.
The DACG method, if run until the final tolerance $\tau = 10^{-8}$, is very slow due to the very small relative separation between consecutive eigenvalues, $\xi_j = (\lambda_{j+1} - \lambda_j)/\lambda_j$, which drives the convergence of this PCG-like solver [16,25].
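To make the role of $\xi_j$ concrete, the following computes the relative separations for a hypothetical spectrum with a tightly clustered low end (the eigenvalues are assumed for illustration only, not taken from the test problems of Table 5):

```python
# Hypothetical eigenvalues, clustered near the lower end of the spectrum.
lams = [1.000, 1.001, 1.050, 2.000]

# Relative separations xi_j = (lambda_{j+1} - lambda_j) / lambda_j.
xi = [(lams[j + 1] - lams[j]) / lams[j] for j in range(len(lams) - 1)]
# A tiny xi_1 (here 1e-3) means DACG approaches lambda_1 very slowly.
```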