# General Strategy of Iteration Methods
We will discuss the general splitting strategy.
## General Splitting Strategy
For the linear system \(\boldsymbol{Ax}=\boldsymbol{b}\), let \(\boldsymbol{A}=\boldsymbol{M}-\boldsymbol{N}\). Then:

$$\boldsymbol{Mx}=\boldsymbol{Nx}+\boldsymbol{b}.$$

We can form the iteration:

$$\boldsymbol{Mx}^{\left( k+1 \right)}=\boldsymbol{Nx}^{\left( k \right)}+\boldsymbol{b}.$$

If \(\boldsymbol{M}\) is invertible, we have:

$$\boldsymbol{x}^{\left( k+1 \right)}=\boldsymbol{M}^{-1}\boldsymbol{Nx}^{\left( k \right)}+\boldsymbol{M}^{-1}\boldsymbol{b}.$$

Denoting \(\boldsymbol{M}^{-1}\boldsymbol{b}=\boldsymbol{f}\) and \(\boldsymbol{M}^{-1}\boldsymbol{N}=\boldsymbol{G}\), we get:

$$\boldsymbol{x}^{\left( k+1 \right)}=\boldsymbol{Gx}^{\left( k \right)}+\boldsymbol{f},$$

where \(\boldsymbol{G}\) is called the iteration matrix.
For example, the Jacobi method takes \(\boldsymbol{M}=\boldsymbol{D}\) and \(\boldsymbol{N}=\boldsymbol{E}+\boldsymbol{F}\), where \(\boldsymbol{A}=\boldsymbol{D}-\boldsymbol{E}-\boldsymbol{F}\) with \(\boldsymbol{D}\) the diagonal part of \(\boldsymbol{A}\), and \(-\boldsymbol{E}\), \(-\boldsymbol{F}\) its strictly lower and strictly upper triangular parts.
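As a concrete illustration, here is a minimal sketch of the Jacobi splitting iteration (my own code, not from the notes; names are illustrative):

```python
import numpy as np

def jacobi_splitting(A, b, x0, num_iters=50):
    """Iterate M x^(k+1) = N x^(k) + b with M = D (Jacobi)."""
    d = np.diag(A)                 # M = D: the diagonal of A
    N = np.diag(d) - A             # N = M - A = E + F
    x = x0.astype(float)
    for _ in range(num_iters):
        x = (N @ x + b) / d        # M is diagonal, so M^{-1} is a divide
    return x

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi_splitting(A, b, np.zeros(2)))   # ~ [0.1667, 0.3333]
print(np.linalg.solve(A, b))                 # reference solution
```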
## Speed of Convergence
Question
: Does the iteration converge? If so, how fast?
If \(\boldsymbol{x}^*\) is a solution of \(\boldsymbol{Ax}=\boldsymbol{b}\), then:

$$\boldsymbol{Mx}^*=\boldsymbol{Nx}^*+\boldsymbol{b}\quad \Longrightarrow \quad \boldsymbol{x}^*=\boldsymbol{Gx}^*+\boldsymbol{f}.$$

Therefore, \(\boldsymbol{x}^*\) is a fixed point of the iteration.

Note that \(\boldsymbol{x}^{\left( k+1 \right)}=\boldsymbol{Gx}^{\left( k \right)}+\boldsymbol{f}\); subtracting the fixed-point identity, we obtain:

$$\boldsymbol{x}^{\left( k+1 \right)}-\boldsymbol{x}^*=\boldsymbol{G}\left( \boldsymbol{x}^{\left( k \right)}-\boldsymbol{x}^* \right).$$

Define \(\boldsymbol{e}^{\left( k \right)}=\boldsymbol{x}^{\left( k \right)}-\boldsymbol{x}^*\). We get:

$$\boldsymbol{e}^{\left( k+1 \right)}=\boldsymbol{Ge}^{\left( k \right)}=\boldsymbol{G}^{k+1}\boldsymbol{e}^{\left( 0 \right)}.$$

This is the error equation.
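To make the error equation concrete, here is a small numerical check (an illustrative sketch using the Jacobi splitting above) that one step maps the error exactly by \(\boldsymbol{G}\):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)            # the fixed point x*

D = np.diag(np.diag(A))
G = np.linalg.inv(D) @ (D - A)            # iteration matrix G = M^{-1} N
f = np.linalg.solve(D, b)

x = np.array([1.0, -1.0])                 # an arbitrary x^(k)
x_next = G @ x + f                        # x^(k+1)
print(x_next - x_star)                    # e^(k+1)
print(G @ (x - x_star))                   # G e^(k): the same vector
```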
### Method 1
Question
: Does \(\boldsymbol{e}^{\left( k \right)}\rightarrow 0\) as \(k\rightarrow +\infty\)? If so, how fast?
Theorem
: If the spectral radius \(\rho \left( \boldsymbol{G} \right) <1\), then \(\boldsymbol{I}-\boldsymbol{G}\) is invertible and \(\boldsymbol{x}^{\left( k \right)}\rightarrow \boldsymbol{x}^*\) for any \(\boldsymbol{x}^{(0)}\).
The converse is also true: if \(\boldsymbol{x}^{\left( k \right)}\) converges for every \(\boldsymbol{f}\) and \(\boldsymbol{x}^{(0)}\), then \(\rho \left( \boldsymbol{G} \right) <1\). (How to prove?)
Remarks
: For a convergent iterative method:
- \(\boldsymbol{b}\) can be arbitrary;
- \(\boldsymbol{x}^{(0)}\) can be arbitrary;
- The method always converges to \(\boldsymbol{x}^*\).
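A numerical check of the theorem (an illustrative sketch of my own, reusing the Jacobi iteration matrix from above): when \(\rho \left( \boldsymbol{G} \right) <1\), the iterates converge from an arbitrary starting vector, with error decaying like \(\rho \left( \boldsymbol{G} \right) ^k\).

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)

D = np.diag(np.diag(A))
G = np.linalg.inv(D) @ (D - A)            # Jacobi iteration matrix
f = np.linalg.solve(D, b)

rho = max(abs(np.linalg.eigvals(G)))
print("rho(G) =", rho)                    # ~0.316 < 1: convergence guaranteed

x = rng.standard_normal(2)                # arbitrary initial guess
for k in range(30):
    x = G @ x + f
print(np.linalg.norm(x - x_star))         # error ~ rho^30: essentially zero
```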
Question
: If \(\rho \left( \boldsymbol{G} \right) >1\), there exists an initial guess for which \(\boldsymbol{x}^{(k)}\) does not converge. If \(\rho \left( \boldsymbol{G} \right) =1\), what happens?
Theorem
: If \(\left\| \boldsymbol{G} \right\| <1\) (any matrix norm is fine), then \(\rho \left( \boldsymbol{G} \right) <1\).
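Note that this condition is sufficient but not necessary, since \(\rho \left( \boldsymbol{G} \right) \le \left\| \boldsymbol{G} \right\|\) for any induced norm. A small example of my own making, where the spectral radius is below 1 but common norms are not:

```python
import numpy as np

G = np.array([[0.5, 2.0],
              [0.0, 0.5]])

print(max(abs(np.linalg.eigvals(G))))   # spectral radius = 0.5 < 1
print(np.linalg.norm(G, np.inf))        # infinity norm = 2.5 > 1
print(np.linalg.norm(G, 1))             # 1-norm = 2.5 > 1
```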
### Method 2
Definition
: A matrix \(\boldsymbol{A}\in \mathbb{R} ^{m\times m}\) is called (weakly) diagonally dominant if:

$$\left| a_{ii} \right|\ge \sum_{j\ne i}{\left| a_{ij} \right|},\quad i=1,\dots ,m,$$

and is called (strongly) diagonally dominant if:

$$\left| a_{ii} \right|>\sum_{j\ne i}{\left| a_{ij} \right|},\quad i=1,\dots ,m.$$
Theorem
: If \(\boldsymbol{A}\) is a strongly diagonally dominant matrix, then the associated Jacobi and Gauss-Seidel iterations converge for any \(\boldsymbol{x}^{(0)}\).
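A quick sanity check of this theorem (my own sketch; the helper names are made up), testing strong diagonal dominance and running Gauss-Seidel, i.e. the splitting \(\boldsymbol{M}=\boldsymbol{D}-\boldsymbol{E}\), \(\boldsymbol{N}=\boldsymbol{F}\):

```python
import numpy as np

def is_strongly_diagonally_dominant(A):
    """|a_ii| > sum_{j != i} |a_ij| in every row."""
    abs_A = np.abs(A)
    return np.all(2 * np.diag(abs_A) > abs_A.sum(axis=1))

def gauss_seidel(A, b, x0, num_iters=50):
    M = np.tril(A)                 # M = D - E: lower triangle incl. diagonal
    N = M - A                      # N = F: the negative of the strict upper part
    x = x0.astype(float)
    for _ in range(num_iters):
        x = np.linalg.solve(M, N @ x + b)   # a triangular solve in practice
    return x

A = np.array([[5.0, 2.0, 1.0],
              [1.0, 4.0, 2.0],
              [1.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
print(is_strongly_diagonally_dominant(A))    # True
print(np.allclose(gauss_seidel(A, b, np.zeros(3)), np.linalg.solve(A, b)))
```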
Theorem
: If \(\boldsymbol{A}\) is symmetric with positive diagonal elements, and \(\omega \in \left( 0,2 \right)\), then SOR converges for any \(\boldsymbol{x}^{(0)}\) if and only if \(\boldsymbol{A}\) is positive definite.
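A minimal SOR sketch (illustrative code of my own, assuming the standard splitting \(\left( \boldsymbol{D}-\omega \boldsymbol{E} \right) \boldsymbol{x}^{\left( k+1 \right)}=\left[ \left( 1-\omega \right) \boldsymbol{D}+\omega \boldsymbol{F} \right] \boldsymbol{x}^{\left( k \right)}+\omega \boldsymbol{b}\)):

```python
import numpy as np

def sor(A, b, x0, omega=1.2, num_iters=100):
    D = np.diag(np.diag(A))
    E = -np.tril(A, k=-1)          # A = D - E - F
    F = -np.triu(A, k=1)
    M = D - omega * E
    x = x0.astype(float)
    for _ in range(num_iters):
        x = np.linalg.solve(M, ((1 - omega) * D + omega * F) @ x + omega * b)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # symmetric positive definite
b = np.array([1.0, 2.0])
print(sor(A, b, np.zeros(2)))             # omega in (0, 2): converges
print(np.linalg.solve(A, b))              # reference solution
```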
Theorem
(Gershgorin Theorem): Given \(\boldsymbol{A}\in \mathbb{R} ^{m\times m}\), the eigenvalues of \(\boldsymbol{A}\) are contained in the union of the following disks:

$$D_i=\left\{ z\in \mathbb{C} :\left| z-a_{ii} \right|\le R_i \right\},\quad R_i=\sum_{j\ne i}{\left| a_{ij} \right|},\quad i=1,\dots ,m.$$

Let \(\lambda\) be any eigenvalue of \(\boldsymbol{A}\). We have:

$$\lambda \in \bigcup_{i=1}^m{D_i}.$$
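As a quick numerical illustration (my own example), one can compute the disk centers and radii and confirm that every eigenvalue lands in at least one disk:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

centers = np.diag(A)                          # a_ii
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # R_i = sum_{j != i} |a_ij|

for lam in np.linalg.eigvals(A):
    in_some_disk = np.any(np.abs(lam - centers) <= radii)
    print(lam, in_some_disk)                  # True for every eigenvalue
```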