# Classical Iterative Methods
We cover three classical iterative methods: Jacobi, Gauss-Seidel, and SOR (Successive Over-Relaxation).
Task
: Solve linear system \(\boldsymbol{Ax}=\boldsymbol{b}\).
Strategy
: Iterate \(\boldsymbol{x}^{\left( 0 \right)}\rightarrow \boldsymbol{x}^{\left( 1 \right)}\rightarrow \boldsymbol{x}^{\left( 2 \right)}\rightarrow \cdots \rightarrow \boldsymbol{x}^{\left( k \right)}\rightarrow \boldsymbol{x}^{\left( k+1 \right)}\rightarrow \cdots\)
## Jacobi Iteration

### Component-wise Viewpoint
Let us discuss the component-wise viewpoint (good for coding) of Jacobi Iteration.
We examine the \(i\)-th equation in \(\boldsymbol{Ax}=\boldsymbol{b}\):

\[
\sum_{j=1}^m{a_{ij}x_j}=b_i
\]

Assuming the \(x_j\ \left( j\ne i \right)\) are known (taken from the current approximation), we have:

\[
a_{ii}x_i=b_i-\sum_{j\ne i}{a_{ij}x_j}
\]

If \(a_{ii}\ne 0\), then:

\[
x_{i}^{\left( k+1 \right)}=\frac{1}{a_{ii}}\left( b_i-\sum_{j\ne i}{a_{ij}x_{j}^{\left( k \right)}} \right) ,\quad i=1,2,\cdots ,m
\]

This is the formula for Jacobi Iteration.
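As a concrete illustration, here is a minimal Python sketch of the formula above; the test system, tolerance, and iteration cap are illustrative choices, not part of the notes:

```python
# A minimal sketch of Jacobi iteration; test data and stopping
# criteria are illustrative assumptions.
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration for Ax = b (assumes a_ii != 0 for all i)."""
    m = len(b)
    x = np.zeros(m) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.empty(m)
        for i in range(m):
            # x_i^{(k+1)} = (b_i - sum_{j != i} a_ij * x_j^{(k)}) / a_ii
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_new[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# A strictly diagonally dominant test system (Jacobi converges for it)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b))    # close to np.linalg.solve(A, b)
```

Note that every component of \(\boldsymbol{x}^{\left( k+1 \right)}\) depends only on \(\boldsymbol{x}^{\left( k \right)}\), which is why a Jacobi sweep is easy to parallelize.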
### Matrix Viewpoint
Let us now discuss the matrix viewpoint (good for analysis) of Jacobi Iteration.
We split \(\boldsymbol{A}\) as:

\[
\boldsymbol{A}=\boldsymbol{D}-\boldsymbol{E}-\boldsymbol{F}
\]

where \(\boldsymbol{D}\) is the diagonal part of \(\boldsymbol{A}\), while \(\boldsymbol{E}\) and \(\boldsymbol{F}\) are the strictly lower and strictly upper triangular parts of \(-\boldsymbol{A}\), respectively. Then:

\[
\left( \boldsymbol{D}-\boldsymbol{E}-\boldsymbol{F} \right) \boldsymbol{x}=\boldsymbol{b}
\]

We get:

\[
\boldsymbol{Dx}=\left( \boldsymbol{E}+\boldsymbol{F} \right) \boldsymbol{x}+\boldsymbol{b}
\]

If \(\boldsymbol{D}\) is invertible, we have:

\[
\boldsymbol{x}^{\left( k+1 \right)}=\boldsymbol{D}^{-1}\left( \boldsymbol{E}+\boldsymbol{F} \right) \boldsymbol{x}^{\left( k \right)}+\boldsymbol{D}^{-1}\boldsymbol{b}
\]

This is the matrix representation of Jacobi Iteration. The iterative matrix is \(\boldsymbol{M}_{\mathrm{jacobi}}=\boldsymbol{D}^{-1}\left( \boldsymbol{E}+\boldsymbol{F} \right)\).
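The matrix viewpoint can be checked numerically. The sketch below (assuming the same illustrative test system as before) builds the splitting, forms \(\boldsymbol{M}_{\mathrm{jacobi}}\), and verifies that its spectral radius is below 1 before iterating:

```python
# A sketch of the matrix viewpoint: A = D - E - F, M = D^{-1}(E + F),
# and the fixed-point iteration x <- M x + D^{-1} b. Test data is an
# illustrative assumption.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))      # diagonal part of A
E = -np.tril(A, k=-1)        # strictly lower triangular part of -A
F = -np.triu(A, k=1)         # strictly upper triangular part of -A

M = np.linalg.solve(D, E + F)    # M_jacobi = D^{-1}(E + F)
c = np.linalg.solve(D, b)        # c = D^{-1} b

print("rho(M) =", max(abs(np.linalg.eigvals(M))))   # < 1: converges

x = np.zeros_like(b)
for _ in range(200):
    x = M @ x + c
print(x)                         # close to np.linalg.solve(A, b)
```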
## Gauss-Seidel Iteration
Using the most recently updated values of the components \(x_j\), we have the algorithm below:

For \(i=1,2,\cdots ,m\):

\[
x_{i}^{\left( k+1 \right)}=\frac{1}{a_{ii}}\left( b_i-\sum_{j<i}{a_{ij}x_{j}^{\left( k+1 \right)}}-\sum_{j>i}{a_{ij}x_{j}^{\left( k \right)}} \right)
\]

End

This is the Forward Gauss-Seidel Iteration.
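A minimal sketch of the forward sweep, assuming the same illustrative test system as the Jacobi sketch; since the array `x` is updated in place, `x[:i]` already holds \(x_{j}^{\left( k+1 \right)}\) while `x[i+1:]` still holds \(x_{j}^{\left( k \right)}\):

```python
# A minimal forward Gauss-Seidel sketch; test data and stopping
# criteria are illustrative assumptions.
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    m = len(b)
    x = np.zeros(m) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(m):
            # x[:i] holds new values, x[i+1:] holds old values
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))    # converges in fewer sweeps than Jacobi here
```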
From:

\[
\left( \boldsymbol{D}-\boldsymbol{E} \right) \boldsymbol{x}=\boldsymbol{Fx}+\boldsymbol{b}
\]

We get:

\[
\boldsymbol{x}^{\left( k+1 \right)}=\left( \boldsymbol{D}-\boldsymbol{E} \right) ^{-1}\boldsymbol{Fx}^{\left( k \right)}+\left( \boldsymbol{D}-\boldsymbol{E} \right) ^{-1}\boldsymbol{b}
\]

This is the matrix expression for Gauss-Seidel Iteration. The iterative matrix is \(\boldsymbol{M}_{\mathrm{FGS}}=\left( \boldsymbol{D}-\boldsymbol{E} \right) ^{-1}\boldsymbol{F}\).
We have another option (Backward Gauss-Seidel Iteration):

\[
\boldsymbol{x}^{\left( k+1 \right)}=\left( \boldsymbol{D}-\boldsymbol{F} \right) ^{-1}\boldsymbol{Ex}^{\left( k \right)}+\left( \boldsymbol{D}-\boldsymbol{F} \right) ^{-1}\boldsymbol{b}
\]

The iterative matrix for it is \(\boldsymbol{M}_{\mathrm{BGS}}=\left( \boldsymbol{D}-\boldsymbol{F} \right) ^{-1}\boldsymbol{E}\). What is the component-wise formula for this option?
Note
: Symmetric Gauss-Seidel Iteration iterates \(i=1,2,\cdots ,m-1,m\), followed by \(i=m,m-1,\cdots ,2,1\); that is, one symmetric Gauss-Seidel iteration is a forward Gauss-Seidel iteration followed by a backward Gauss-Seidel iteration. The iterative matrix is \(\boldsymbol{M}_{\mathrm{SGS}}=\boldsymbol{M}_{\mathrm{BGS}}\cdot \boldsymbol{M}_{\mathrm{FGS}}=\left( \boldsymbol{D}-\boldsymbol{F} \right) ^{-1}\boldsymbol{E}\left( \boldsymbol{D}-\boldsymbol{E} \right) ^{-1}\boldsymbol{F}\).
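Under the same illustrative assumptions as the earlier sketches, one symmetric Gauss-Seidel step can be written as a forward sweep followed by a backward sweep:

```python
# A sketch of one symmetric Gauss-Seidel step: a forward sweep
# (i = 1, ..., m) then a backward sweep (i = m, ..., 1), both updating
# x in place. Test data is an illustrative assumption.
import numpy as np

def sgs_step(A, b, x):
    m = len(b)
    for i in range(m):                       # forward sweep
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (b[i] - s) / A[i, i]
    for i in range(m - 1, -1, -1):           # backward sweep
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
for _ in range(50):
    x = sgs_step(A, b, x)
print(x)    # close to np.linalg.solve(A, b)
```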
## Successive Over-Relaxation (SOR)

### Introduction to SOR
Pick \(\omega \ne 0\). For \(\boldsymbol{Ax}=\boldsymbol{b}\), we have \(\omega \left( \boldsymbol{D}-\boldsymbol{E}-\boldsymbol{F} \right) \boldsymbol{x}=\omega \boldsymbol{b}\), i.e.:

\[
\left( \boldsymbol{D}-\omega \boldsymbol{E} \right) \boldsymbol{x}=\left[ \left( 1-\omega \right) \boldsymbol{D}+\omega \boldsymbol{F} \right] \boldsymbol{x}+\omega \boldsymbol{b}
\]

Thus:

\[
\boldsymbol{x}^{\left( k+1 \right)}=\left( \boldsymbol{D}-\omega \boldsymbol{E} \right) ^{-1}\left[ \left( 1-\omega \right) \boldsymbol{D}+\omega \boldsymbol{F} \right] \boldsymbol{x}^{\left( k \right)}+\omega \left( \boldsymbol{D}-\omega \boldsymbol{E} \right) ^{-1}\boldsymbol{b}
\]

This is the matrix representation of SOR iteration. The iterative matrix is \(\boldsymbol{M}_{\mathrm{SOR}}=\left( \boldsymbol{D}-\omega \boldsymbol{E} \right) ^{-1}\left[ \left( 1-\omega \right) \boldsymbol{D}+\omega \boldsymbol{F} \right]\).
What is the component-wise formula for SOR?
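Leaving the component-wise formula as an exercise, here is a sketch of the matrix form under the same illustrative test system; the value \(\omega =1.1\) is an arbitrary choice for demonstration:

```python
# A sketch of SOR in matrix form: each step solves
# (D - wE) x^{(k+1)} = ((1-w)D + wF) x^{(k)} + w b.
# w = 1.1 and the test data are illustrative assumptions.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
w = 1.1                          # relaxation parameter

D = np.diag(np.diag(A))
E = -np.tril(A, k=-1)
F = -np.triu(A, k=1)

x = np.zeros_like(b)
for _ in range(100):
    rhs = ((1 - w) * D + w * F) @ x + w * b
    x = np.linalg.solve(D - w * E, rhs)
print(x)                         # close to np.linalg.solve(A, b)
```

Since \(\boldsymbol{D}-\omega \boldsymbol{E}\) is lower triangular, each step could be done by forward substitution; `np.linalg.solve` is used here only for brevity.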
### Discussion on SOR
Geometric explanation of SOR: each component update is a weighted average of the old value and the Gauss-Seidel update \(x_{i}^{\mathrm{GS}}\),

\[
x_{i}^{\left( k+1 \right)}=\left( 1-\omega \right) x_{i}^{\left( k \right)}+\omega x_{i}^{\mathrm{GS}}
\]

so \(\omega\) controls how far the iterate moves along the Gauss-Seidel correction direction. When \(\omega \in \left( 0,1 \right)\), the step is damped (Under-Relaxation); when \(\omega =1\), SOR reduces to Gauss-Seidel; when \(\omega \in \left( 1,2 \right)\), the step overshoots the Gauss-Seidel point, and it is called Over-Relaxation.
When \(\omega\) is picked as the so-called optimal relaxation parameter, SOR significantly outperforms both Jacobi and Gauss-Seidel. For certain classes of matrices (e.g., consistently ordered matrices), the optimal parameter is:

\[
\omega _{\mathrm{opt}}=\frac{2}{1+\sqrt{1-\rho \left( \boldsymbol{G} \right) ^2}}
\]

where \(\boldsymbol{G}\) is the iterative matrix of Jacobi Iteration, and \(\rho \left( \boldsymbol{G} \right)\) is the spectral radius of \(\boldsymbol{G}\).
This formula is rarely used. In practice, we use trial and error to find \(\omega\).
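Still, the formula is easy to sanity-check numerically. The sketch below uses the 1D Poisson matrix \(\mathrm{tridiag}\left( -1,2,-1 \right)\), an illustrative model problem (it is consistently ordered, so the formula applies), and compares the spectral radii of the Jacobi, Gauss-Seidel, and optimally relaxed SOR iterative matrices:

```python
# A numerical sanity check of the optimal-omega formula; the 1D Poisson
# model problem and its size are illustrative assumptions.
import numpy as np

n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiag(-1, 2, -1)

D = np.diag(np.diag(A))
E = -np.tril(A, k=-1)
F = -np.triu(A, k=1)

def rho(M):
    """Spectral radius: largest eigenvalue magnitude."""
    return max(abs(np.linalg.eigvals(M)))

G = np.linalg.solve(D, E + F)                 # Jacobi iterative matrix
w_opt = 2 / (1 + np.sqrt(1 - rho(G) ** 2))

M_fgs = np.linalg.solve(D - E, F)
M_sor = np.linalg.solve(D - w_opt * E, (1 - w_opt) * D + w_opt * F)

print("rho(Jacobi)       =", rho(G))          # ~0.989
print("rho(Gauss-Seidel) =", rho(M_fgs))      # ~0.978
print("omega_opt         =", w_opt)           # ~1.74
print("rho(SOR, w_opt)   =", rho(M_sor))      # ~0.74, much smaller
```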