Master the math behind fast linear solvers.
This clear guide explains how conjugate gradient methods work through a variational lens, linking ideas like Krylov subspaces, Lanczos vectors, and preconditioning to practical algorithms.
In accessible chapters, you’ll see how to turn a linear system into efficient iterative steps, understand when methods converge, and learn how special cases (symmetric, nonsymmetric, indefinite) change the approach. The text ties theory to algorithm design, showing how error bounds arise from polynomial approximation and how stable implementations like SYMMLQ and MINRES are constructed.
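To give a flavor of the iterative steps the book develops, here is a minimal sketch of the classical conjugate gradient method for a symmetric positive definite system; the function name and the small example system are illustrative, not taken from the book.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b iteratively, assuming A is symmetric positive definite."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x          # residual of the current iterate
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length minimizing the A-norm error along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # converged: residual small enough
            break
        p = r + (rs_new / rs_old) * p  # new direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x

# Small SPD example (illustrative values)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic the method terminates in at most n steps; in practice, as the book discusses, convergence speed depends on the spectrum of A and can be improved by preconditioning.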