Regarding total time, the new method is typically 2 to 5 times faster than LAMG; in the few cases where LAMG is faster, the total time needed by the new method is only modestly larger.
In fact, LAMG is better only for graphs with few nonzero entries per row.
For the new method, this is twice the number of main iterations (as seen in Algorithm 2, we use two smoothing iterations per preconditioner call), whereas for LAMG it is three times that number, since LAMG applies three Gauss-Seidel smoothing iterations at each level.
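The sweep counts above amount to a simple multiplication; the following minimal sketch makes the arithmetic explicit (the function name and the iteration count in the example are illustrative, not from the paper):

```python
def total_smoothing_sweeps(main_iterations, sweeps_per_cycle):
    """Total smoothing sweeps at a given level after all main iterations.

    sweeps_per_cycle is 2 for the new method (two smoothing iterations
    per preconditioner call, cf. Algorithm 2) and 3 for LAMG (three
    Gauss-Seidel smoothing iterations at each level).
    """
    return sweeps_per_cycle * main_iterations

# For, e.g., 10 main iterations (a hypothetical count):
new_method_sweeps = total_smoothing_sweeps(10, 2)  # 20 sweeps
lamg_sweeps = total_smoothing_sweeps(10, 3)        # 30 sweeps
```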
Because the numerical results reported in  also focus on computing time (rather than on iteration counts, complexities, or similar measures), we believe that the LAMG code was designed to be a reasonably fast implementation of the method.
Regarding the other AMG methods, the new approach outperforms the state-of-the-art LAMG.
O. E. Livne, Lean Algebraic Multigrid (LAMG) Matlab Software, 2012.
The final combination of settings, denoted S2, assumes a later version of LAMG, code version 1.5.5.
The LAMG options S2 described in Table 5.1 are used.
For LAMG one can observe a slow increase in iteration count as the problem size increases.
A comparison of the iteration counts for LAMG, JCG, and AMG1R6 is given in Figure 5.1.
Significantly, for LAMG and for AMG1R6, the iteration counts grow at roughly the same rate as the problem size increases, indicating the same basic algorithmic scaling behavior of the two codes.
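One common way to make "roughly the same growth rate" concrete is to fit the slope of iteration count versus problem size on a log-log scale. The sketch below (illustrative, not taken from the paper; the sample data are hypothetical) does this with an ordinary least-squares fit:

```python
import math

def loglog_slope(sizes, iters):
    """Least-squares slope of log(iteration count) vs. log(problem size)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(k) for k in iters]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Hypothetical data: iteration counts growing like n**0.25.
sizes = [10**3, 10**4, 10**5]
iters = [10 * (n / 10**3) ** 0.25 for n in sizes]
slope = loglog_slope(sizes, iters)  # close to 0.25
```

A slope near zero indicates near-optimal scaling; two codes with equal slopes share the same asymptotic growth of their iteration counts.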
Significantly, the maximum average number of nonzeros per row (a measure of matrix sparsity and of its impact on communication in the parallel case) is strongly bounded for LAMG, growing only very slowly with problem size, whereas for AMG1R6 this quantity grows very rapidly.
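This quantity can be computed directly from the multigrid hierarchy. A minimal sketch follows; the data layout (a list of nonzero/row-count pairs, one per level) and the sample numbers are assumptions for illustration only:

```python
def max_avg_nonzeros_per_row(hierarchy):
    """Maximum over levels of the average number of nonzeros per row.

    hierarchy: list of (nnz, nrows) pairs, one per level of the
    multigrid hierarchy (an assumed layout for this illustration).
    """
    return max(nnz / nrows for nnz, nrows in hierarchy)

# Hypothetical two-level hierarchy: 5 nnz/row on the fine level,
# 9 nnz/row on the coarse level.
hierarchy = [(5_000_000, 1_000_000), (900_000, 100_000)]
worst_case = max_avg_nonzeros_per_row(hierarchy)  # 9.0
```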