Under some standard assumptions on the objective function, we observe that the convergence of the diagonal initial approximation of the LBFGS scheme is R-linear.
In the VKF method, introduced in , the minimization is carried out with the LBFGS optimization method, which produces both the state estimate and a low-storage approximation of the covariance (the inverse Hessian at the minimizer).
Thus, the LBFGS optimization routine provides low-storage approximations for both $(C^p_k)^{-1}$ and $C^{est}_k$.
(c) Apply LBFGS to (2.2) to get an approximation $B^*_k$ of $(C^p_k)^{-1}$.
(a) Minimize expression (2.1) using LBFGS to get the state estimate $x^{est}_k$ and the covariance estimate $B^{\#}_k$.
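The pairing described above, one LBFGS run yielding both a minimizer and a low-storage inverse-Hessian operator, can be sketched with SciPy's L-BFGS-B routine, whose result object exposes exactly these two outputs. The quadratic cost, `C_inv`, and `y` below are illustrative placeholders standing in for expression (2.1), not quantities from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
C_inv = A.T @ A + n * np.eye(n)   # SPD matrix playing the role of an inverse covariance
y = rng.standard_normal(n)        # stand-in for the data/prior term

def cost(x):
    r = x - y
    return 0.5 * r @ C_inv @ r    # quadratic surrogate for the cost function

def grad(x):
    return C_inv @ (x - y)

res = minimize(cost, np.zeros(n), jac=grad, method="L-BFGS-B")
x_est = res.x          # state estimate: the minimizer
B_est = res.hess_inv   # low-storage inverse-Hessian (covariance) approximation

# B_est is a linear operator built from the stored correction pairs; it is
# never formed as a dense n-by-n matrix unless explicitly requested.
v = rng.standard_normal(n)
cov_times_v = B_est.dot(v)  # apply the approximate covariance to a vector
```

Note that `res.hess_inv` stores only the recent correction pairs, so the memory cost stays linear in the problem dimension, which is the property the VKF construction exploits.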
Comparison of Nocedal's LBFGS Code with HCL_UMin lbfgs on the Extended Rosenbrock Function
n          f calls (LBFGS)   f calls (HCL)   Time (LBFGS)   Time (HCL)
1,000            49               58              0.13          0.12
10,000           53               58              1.59          1.59
100,000          52               58             33.04         34.29

Table VII.

n          f calls (LBFGS)   f calls (HCL)   Time (LBFGS)   Time (HCL)
121              69               77              5.51          5.51
441              75               78             29.67         31.93
1681             77               75            181.96        183.21

3.2 Implicitly Restarted Arnoldi
In the latter case, the linear algebra performed by the optimization code consists of level-1 operations; since both LBFGS and HCL call level-1 BLAS routines when the vectors are in core, the two codes cannot differ greatly in performance.
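To make the level-1 claim concrete, here is a minimal sketch of the LBFGS two-loop recursion (a generic textbook form, not code from either package): every vector operation it performs is a dot product or an axpy-style update, i.e., a level-1 BLAS operation.

```python
import numpy as np

def lbfgs_two_loop(g, s_list, y_list):
    """Compute H*g via the L-BFGS two-loop recursion.

    Every vector operation below is level-1: a dot product (s @ q)
    or an axpy-style update (q -= a * y).
    """
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    # backward pass: newest correction pair first
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        q -= a * y
        alphas.append(a)
    # initial Hessian scaling: gamma = s'y / y'y for the newest pair
    if s_list:
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # forward pass: oldest pair first
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return r

# On a quadratic with Hessian diag(2, 5), two exact correction pairs
# make the recursion reproduce the exact inverse Hessian.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
direction = lbfgs_two_loop(np.array([1.0, 1.0]), [e1, e2], [2 * e1, 5 * e2])
```

Because the recursion touches each stored pair only through dots and axpys, any performance difference between two implementations must come from overheads outside the linear algebra, which is the point made above.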
As has been stated, the approach presented here uses the LBFGS algorithm directly within the context of the Kalman filter.
The aim of the current paper is to demonstrate the use of LBFGS within the standard (non-variational) formulation of the linear or extended Kalman filter.
Conclusions are then given in Section 5, and implementation details of the LBFGS algorithm are contained in Appendix A.