The above experiments illustrate that the proposed algorithms FBOSS and FBOSP outperform BOS, SBB, and AM in terms of computational efficiency, even though the matrix $D^T D$ arising from the TV-based SparseSENSE model can be diagonalized by the FFT.
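The FFT-diagonalization property can be checked in a minimal 1D setting (our own illustration, assuming periodic boundary conditions; the paper's $D$ is the 2D TV difference operator): with periodic boundaries the difference operator is circulant, and a circulant matrix is diagonalized by the DFT, its eigenvalues being the DFT of its first column.

```python
import numpy as np

n = 8
# 1D forward-difference operator with periodic boundary: (Dx)_i = x_{i+1} - x_i.
# With periodic boundaries D is circulant, so D^T D is circulant as well.
D = np.roll(np.eye(n), 1, axis=1) - np.eye(n)
DtD = D.T @ D

# Eigenvalues of a circulant matrix are the DFT of its first column,
# so applying D^T D is a pointwise multiplication in the Fourier domain.
lam = np.fft.fft(DtD[:, 0])
x = np.random.default_rng(1).standard_normal(n)
assert np.allclose(np.fft.fft(DtD @ x), lam * np.fft.fft(x))
print("D^T D acts diagonally under the FFT")
```

This is why the $x$-subproblem in such models can be solved at FFT cost rather than by a general linear solve.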
We plot the relative error as a function of CPU time for FBOSS and FBOSP under different parameter settings.
In this subsection, we analyze two relations between the proposed algorithms and existing popular methods: (1) FBOSS and ADMM, and (2) FBOSP and PDHG.
If $\beta = \gamma^{-1}$, FBOSS is almost equivalent to ADMM, except that the former employs a semi-implicit scheme for the $x$-subproblem, while the latter employs an implicit scheme.
In this section, we establish the convergence of the proposed algorithm FBOSS with constant stepsize $S$ by exploiting the nonexpansivity of the proximity operator.
Next, we state the main convergence result for FBOSS with constant $S$, which follows from the nonexpansivity of the proximity operator.
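The nonexpansivity invoked here can be illustrated concretely for the proximity operator of the $\ell_1$ norm (componentwise soft-thresholding, the prox that typically appears in sparse models). The sketch below is our own numerical check, not code from the paper: it verifies $\|\mathrm{prox}(x) - \mathrm{prox}(y)\| \le \|x - y\|$ on random pairs.

```python
import numpy as np

def prox_l1(x, t):
    """Proximity operator of t * ||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Nonexpansivity: ||prox(x) - prox(y)|| <= ||x - y|| for all x, y.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(64), rng.standard_normal(64)
    assert (np.linalg.norm(prox_l1(x, 0.5) - prox_l1(y, 0.5))
            <= np.linalg.norm(x - y) + 1e-12)
print("nonexpansivity verified on 1000 random pairs")
```

In fact, proximity operators are firmly nonexpansive, which is the stronger property that convergence arguments of this type usually rely on.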
Repeat:
    Given $x^k$ and $\delta^k$, compute $z^{k+1}$;
    Given $w^k$ and $x^k$, compute $s^{k+1}$;
    Given $w^k$, $x^k$, and $s^{k+1}$, compute $w^{k+1}$;
    Given $z^{k+1}$, $w^{k+1}$, and $\delta^{k+1}$, compute $x^{k+1}$;
    Given $x^k$ and $x^{k+1}$, compute the relative error; $k \leftarrow k + 1$;
until $\|x^k - x^{k-1}\| / \|x^k\| < \epsilon$
return $x^k$

Algorithm 1: FBOSS for the SparseSENSE model.
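The control flow of Algorithm 1 can be sketched as follows. This is only a skeleton under our own naming: the `update_*` callables are hypothetical placeholders for the paper's subproblem solves (which are not reproduced in this section), and the $\delta$-update rule is likewise a stand-in; only the iteration order and the relative-error stopping rule are taken from the listing above.

```python
import numpy as np

def fboss(x0, w0, delta0, updates, eps=1e-6, max_iter=500):
    """Skeleton of the FBOSS loop; `updates` holds placeholder callables
    (update_z, update_s, update_w, update_x, update_delta) standing in
    for the subproblem solves of Algorithm 1."""
    update_z, update_s, update_w, update_x, update_delta = updates
    x, w, delta = x0, w0, delta0
    for k in range(max_iter):
        z = update_z(x, delta)          # z^{k+1} from x^k and delta^k
        s = update_s(w, x)              # s^{k+1} from w^k and x^k
        w = update_w(w, x, s)           # w^{k+1} from w^k, x^k, s^{k+1}
        delta = update_delta(delta, x)  # delta^{k+1} (rule elided in the text)
        x_new = update_x(z, w, delta)   # x^{k+1} from z^{k+1}, w^{k+1}, delta^{k+1}
        # Stopping rule: ||x^{k+1} - x^k|| / ||x^{k+1}|| < eps.
        rel_err = np.linalg.norm(x_new - x) / max(np.linalg.norm(x_new), 1e-30)
        x = x_new
        if rel_err < eps:
            break
    return x

# Toy placeholders whose x-update is a contraction with fixed point x = 2,
# just to exercise the loop and its stopping criterion.
updates = (
    lambda x, d: x,                  # z-update (identity placeholder)
    lambda w, x: x,                  # s-update (identity placeholder)
    lambda w, x, s: w,               # w-update (identity placeholder)
    lambda z, w, d: 0.5 * z + 1.0,   # x-update: contraction toward 2
    lambda d, x: d,                  # delta-update (identity placeholder)
)
x = fboss(np.full(4, 10.0), np.zeros(4), np.zeros(4), updates)
```

With these toy updates the iterates converge to the fixed point $x = 2$, and the loop exits once the relative change drops below `eps`.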