Stability⚓︎
Stability is a property of algorithms, just as conditioning is a property of problems.
Two concepts:
Problem
: \(f: X\mapsto Y\)
Algorithm
: \(\tilde{f}: X\mapsto Y\)
The algorithm \(\tilde{f}\) is used to solve the problem \(f\).
Backward Stability⚓︎
Definition of Backward Stability⚓︎
Definition
(Backward Stability): An algorithm \(\tilde{f}\) is called backward stable if for each \(x\in X\):

$$\tilde{f}\left( x \right) =f\left( \tilde{x} \right)$$

for some \(\tilde{x}\) with \(\frac{\left\| x-\tilde{x} \right\|}{\left\| x \right\|}\leqslant O\left( \varepsilon _{\mathrm{machine}} \right)\).
In words, a backward stable algorithm gives exactly the right solution to a nearly right problem.
Theorem
: If \(\tilde{f}\) is a backward stable algorithm for a problem \(f: X\mapsto Y\) with condition number \(\kappa \left( f \right)\), then the relative error satisfies:

$$\frac{\left\| \tilde{f}\left( x \right) -f\left( x \right) \right\|}{\left\| f\left( x \right) \right\|}=O\left( \kappa \left( f \right) \varepsilon _{\mathrm{machine}} \right)$$
Proof
: \(\tilde{f}\) is backward stable: \(\forall x\in X\), \(\exists \tilde{x}\in X\) such that \(\tilde{f}\left( x \right) =f\left( \tilde{x} \right) , \frac{\left\| x-\tilde{x} \right\|}{\left\| x \right\|}\leqslant O\left( \varepsilon _{\mathrm{machine}} \right)\). Then, by the definition of the condition number, we can get:

$$\frac{\left\| \tilde{f}\left( x \right) -f\left( x \right) \right\|}{\left\| f\left( x \right) \right\|}=\frac{\left\| f\left( \tilde{x} \right) -f\left( x \right) \right\|}{\left\| f\left( x \right) \right\|}\leqslant \left( \kappa \left( f \right) +o\left( 1 \right) \right) \frac{\left\| \tilde{x}-x \right\|}{\left\| x \right\|}=O\left( \kappa \left( f \right) \varepsilon _{\mathrm{machine}} \right)$$
Remark
: If \(\boldsymbol{Ax}=\boldsymbol{b}\) is ill-conditioned, one must expect to lose about \(\log _{10}\kappa \left( \boldsymbol{A} \right)\) digits of accuracy when computing the solution with a backward stable method, except in very special cases.
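This rule of thumb can be seen numerically. The sketch below (plain Python, illustrative only) solves a Hilbert system — a classic ill-conditioned example with \(\kappa \left( \boldsymbol{H} \right) \sim 10^{10}\) for \(m=8\) — with Gaussian elimination with partial pivoting, a backward stable method, and measures the forward error against the known exact solution:

```python
# Illustration of the remark: a backward stable solver still loses about
# log10(kappa) digits on an ill-conditioned system (here a Hilbert matrix).

def solve(A, b):
    """Gaussian elimination with partial pivoting (plain-Python sketch)."""
    m = len(b)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    for k in range(m):
        p = max(range(k, m), key=lambda i: abs(A[i][k]))   # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, m):
            l = A[i][k] / A[k][k]      # multiplier
            for j in range(k, m):
                A[i][j] -= l * A[k][j]
            b[i] -= l * b[k]
    x = [0.0] * m                      # back substitution
    for i in range(m - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, m))
        x[i] = s / A[i][i]
    return x

m = 8
H = [[1.0 / (i + j + 1) for j in range(m)] for i in range(m)]
b = [sum(row) for row in H]            # chosen so the exact solution is all ones
x = solve(H, b)
err = max(abs(v - 1.0) for v in x)
print(err)   # far above machine epsilon: many digits are lost
```

The computed error is many orders of magnitude above \(\varepsilon _{\mathrm{machine}}\), even though the method itself is backward stable: the loss comes from the conditioning of the problem, not from the algorithm.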
Example of Backward Stability⚓︎
Question
: How can we tell whether an algorithm is backward stable? (Usually a tough and tedious task.)
Example
: Forward substitution is backward stable.
Consider \(\boldsymbol{Rx}=\boldsymbol{b}\), where \(\boldsymbol{R}\in \mathbb{R} ^{m\times m}\) is a lower triangular matrix. The forward substitution steps are:

$$x_1=\frac{b_1}{r_{11}},\quad x_i=\frac{b_i-\sum_{j=1}^{i-1}{r_{ij}x_j}}{r_{ii}},\quad i=2,\dots ,m$$

where \(r_{ij}\) is the \(\left( i,j \right)\) entry of \(\boldsymbol{R}\).
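These steps can be sketched in plain Python (lists of lists, no external libraries):

```python
def forward_substitution(R, b):
    """Solve R x = b for lower-triangular R:
    x_i = (b_i - sum_{j<i} r_ij x_j) / r_ii."""
    m = len(b)
    x = [0.0] * m
    for i in range(m):
        s = b[i]
        for j in range(i):
            s -= R[i][j] * x[j]      # subtract the already-known terms
        x[i] = s / R[i][i]           # divide by the diagonal entry
    return x

R = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [4.0, 5.0, 6.0]]
b = [2.0, 7.0, 32.0]                 # chosen so the exact solution is (1, 2, 3)
print(forward_substitution(R, b))    # [1.0, 2.0, 3.0]
```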
Denote \(+,-,\times ,\div\) as exact operations and \(\oplus ,\ominus ,\otimes ,\oslash\) as operations with error.
Case 1⚓︎
Case \(m=1\): Exact version:

$$x_1=\frac{b_1}{r_{11}}$$

Computed version:

$$\hat{x}_1=b_1\oslash r_{11}=\frac{b_1}{r_{11}}\left( 1+\varepsilon _1 \right) ,\quad \left| \varepsilon _1 \right|\leqslant \varepsilon _{\mathrm{machine}}$$

Introduce \(\varepsilon _1'\) with \(\left( 1+\varepsilon _1 \right) \left( 1+\varepsilon _1' \right) =1\), so \(\left| \varepsilon _1' \right|\leqslant \varepsilon _{\mathrm{machine}}+O\left( \varepsilon _{\mathrm{machine}}^{2} \right)\). We can rewrite as:

$$\hat{x}_1=\frac{b_1}{r_{11}\left( 1+\varepsilon _1' \right)},\quad \text{i.e.}\quad \left( r_{11}+\delta r_{11} \right) \hat{x}_1=b_1,\quad \delta r_{11}=r_{11}\varepsilon _1'$$

The outcome can be viewed as the exact solution of a slightly perturbed problem.
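This can be checked numerically. In the snippet below (a quick illustration, assuming only IEEE double arithmetic), the computed quotient is the exact solution of a perturbed scalar equation whose perturbation is on the order of machine epsilon:

```python
# For m = 1 the computed solution xh = fl(b / r) solves the perturbed problem
# (r + dr) * xh = b exactly, where dr = b / xh - r has relative size ~ eps.
b, r = 1.0, 3.0
xh = b / r                 # one floating-point division: xh = (b/r)(1 + eps1)
dr = b / xh - r            # backward perturbation that makes xh exact
print(abs(dr) / abs(r))    # a small multiple of machine epsilon (~1e-16)
```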
Case 2⚓︎
Case \(m=2\):

Step 1:

$$\hat{x}_1=b_1\oslash r_{11}=\frac{b_1}{r_{11}\left( 1+\varepsilon _1' \right)},\quad \left| \varepsilon _1' \right|\leqslant \varepsilon _{\mathrm{machine}}+O\left( \varepsilon _{\mathrm{machine}}^{2} \right)$$

Step 2:

$$\hat{x}_2=\left( b_2\ominus \left( r_{21}\otimes \hat{x}_1 \right) \right) \oslash r_{22}=\frac{\left( b_2-r_{21}\hat{x}_1\left( 1+\varepsilon _2 \right) \right) \left( 1+\varepsilon _3 \right) \left( 1+\varepsilon _4 \right)}{r_{22}},\quad \left| \varepsilon _i \right|\leqslant \varepsilon _{\mathrm{machine}}$$

Introduce \(\left( 1+\varepsilon _i \right) \left( 1+\varepsilon _i' \right) =1\) for \(i=3,4\), then:

$$\hat{x}_2=\frac{b_2-r_{21}\left( 1+\varepsilon _2 \right) \hat{x}_1}{r_{22}\left( 1+\varepsilon _3' \right) \left( 1+\varepsilon _4' \right)}$$

Then:

$$\left( \begin{matrix}
	r_{11}\left( 1+\varepsilon _1' \right)&		0\\
	r_{21}\left( 1+\varepsilon _2 \right)&		r_{22}\left( 1+\varepsilon _3' \right) \left( 1+\varepsilon _4' \right)\\
\end{matrix} \right) \left( \begin{array}{c}
	\hat{x}_1\\
	\hat{x}_2\\
\end{array} \right) =\left( \begin{array}{c}
	b_1\\
	b_2\\
\end{array} \right)$$
We can claim that solving \(\boldsymbol{Rx}=\boldsymbol{b}\) by a computer with a numerical solution \(\hat{\boldsymbol{x}}\) can be viewed as solving \(\left( \boldsymbol{R}+\delta \boldsymbol{R} \right) \hat{\boldsymbol{x}}=\boldsymbol{b}\) exactly. Also:

$$\frac{\left| \delta r_{ij} \right|}{\left| r_{ij} \right|}\leqslant 2\varepsilon _{\mathrm{machine}}+O\left( \varepsilon _{\mathrm{machine}}^{2} \right)$$
General Case⚓︎
For general \(m\), \(\hat{\boldsymbol{x}}\) satisfies:

$$\left( \boldsymbol{R}+\delta \boldsymbol{R} \right) \hat{\boldsymbol{x}}=\boldsymbol{b},\quad \frac{\left| \delta r_{ij} \right|}{\left| r_{ij} \right|}\leqslant m\varepsilon _{\mathrm{machine}}+O\left( \varepsilon _{\mathrm{machine}}^{2} \right)$$

Finally, we can claim that the forward substitution algorithm computes the solution \(\hat{\boldsymbol{x}}\) of \(\boldsymbol{Rx}=\boldsymbol{b}\) satisfying

$$\left( \boldsymbol{R}+\delta \boldsymbol{R} \right) \hat{\boldsymbol{x}}=\boldsymbol{b}$$

for some \(\delta \boldsymbol{R}\) satisfying

$$\frac{\left\| \delta \boldsymbol{R} \right\|}{\left\| \boldsymbol{R} \right\|}=O\left( \varepsilon _{\mathrm{machine}} \right)$$
The same argument applies to back substitution for upper triangular systems; forward and back substitution are both backward stable.
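The claim can also be checked empirically: run forward substitution on a fixed lower triangular system and measure the normwise backward error \(\eta =\left\| \boldsymbol{R}\hat{\boldsymbol{x}}-\boldsymbol{b} \right\| _{\infty}/\left( \left\| \boldsymbol{R} \right\| _{\infty}\left\| \hat{\boldsymbol{x}} \right\| _{\infty} \right)\), which should come out as a small multiple of \(\varepsilon _{\mathrm{machine}}\). (A numerical illustration, not a proof; the test matrix below is an arbitrary choice.)

```python
def forward_substitution(R, b):
    """Solve R x = b for lower-triangular R."""
    m = len(b)
    x = [0.0] * m
    for i in range(m):
        s = b[i]
        for j in range(i):
            s -= R[i][j] * x[j]
        x[i] = s / R[i][i]
    return x

m = 50
# a fixed lower triangular test matrix with unit diagonal
R = [[(i - j + 1.0) / (i + 1.0) if j < i else (1.0 if i == j else 0.0)
      for j in range(m)] for i in range(m)]
b = [sum(row) for row in R]              # exact solution: all ones
xh = forward_substitution(R, b)

# normwise backward error in the infinity norm
resid = max(abs(sum(R[i][j] * xh[j] for j in range(m)) - b[i]) for i in range(m))
norm_R = max(sum(abs(v) for v in row) for row in R)
norm_x = max(abs(v) for v in xh)
eta = resid / (norm_R * norm_x)
print(eta)    # a small multiple of machine epsilon
```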
General Stability⚓︎
Definition of General Stability⚓︎
Problem
: \(f: X\mapsto Y\)
Algorithm
: \(\tilde{f}: X\mapsto Y\)
Definition
: \(\tilde{f}\) is called stable if for all \(x\in X\),

$$\frac{\left\| \tilde{f}\left( x \right) -f\left( \tilde{x} \right) \right\|}{\left\| f\left( \tilde{x} \right) \right\|}=O\left( \varepsilon _{\mathrm{machine}} \right)$$

for some \(\tilde{x}\) satisfying \(\frac{\left\| \tilde{x}-x \right\|}{\left\| x \right\|}=O\left( \varepsilon _{\mathrm{machine}} \right)\).
Conclusion
: If \(\tilde{f}\) is backward stable, then it is also stable.
In words, a stable algorithm gives nearly the right solution to a nearly right problem.
Conclusions on Gaussian Elimination⚓︎
Theorem
: Let the factorization \(\boldsymbol{A}=\boldsymbol{LU}\) of a nonsingular matrix \(\boldsymbol{A}\in \mathbb{R} ^{m\times m}\) be computed by Gaussian Elimination without pivoting, with computed factors \(\tilde{\boldsymbol{L}}\) and \(\tilde{\boldsymbol{U}}\); then:

$$\tilde{\boldsymbol{L}}\tilde{\boldsymbol{U}}=\boldsymbol{A}+\delta \boldsymbol{A}$$
for some \(\delta \boldsymbol{A}\in \mathbb{R} ^{m\times m}\) satisfying \(\frac{\left\| \delta \boldsymbol{A} \right\|}{\left\| \boldsymbol{L} \right\| \left\| \boldsymbol{U} \right\|}=O\left( \varepsilon _{\mathrm{machine}} \right)\).
If \(\left\| \boldsymbol{L} \right\| \left\| \boldsymbol{U} \right\| =O\left( \left\| \boldsymbol{A} \right\| \right)\) , then Gaussian Elimination without pivoting is stable. Otherwise, the method is not stable.
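The failure mode is easy to see on a \(2\times 2\) example. The sketch below (plain Python, for illustration) eliminates a well-conditioned matrix whose \(\left( 1,1 \right)\) entry is tiny: without pivoting the multiplier is \(10^{20}\), so \(\left\| \boldsymbol{L} \right\| \left\| \boldsymbol{U} \right\| \gg \left\| \boldsymbol{A} \right\|\) and the computed solution is badly wrong; one row swap repairs it.

```python
# Gaussian elimination on a 2x2 system, with and without row pivoting.
# The matrix is well-conditioned, but the tiny pivot 1e-20 makes the
# no-pivoting factorization explode: ||L|| ||U|| ~ 1e20 >> ||A||.

def solve_2x2(a11, a12, a21, a22, b1, b2, pivot):
    if pivot and abs(a21) > abs(a11):
        # swap the rows so the larger pivot is on top
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    l21 = a21 / a11              # multiplier
    u22 = a22 - l21 * a12        # Schur complement
    y2 = b2 - l21 * b1           # apply L^{-1} to the right-hand side
    x2 = y2 / u22                # back substitution
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

# the exact solution of [[1e-20, 1], [1, 1]] x = [1, 2] is very close to (1, 1)
bad = solve_2x2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=False)
good = solve_2x2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=True)
print(bad)    # first component is completely wrong without pivoting
print(good)   # both components are accurate with pivoting
```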
Theorem
: Gaussian Elimination with partial pivoting is backward stable:

$$\tilde{\boldsymbol{L}}\tilde{\boldsymbol{U}}=\tilde{\boldsymbol{P}}\boldsymbol{A}+\delta \boldsymbol{A}$$

for some \(\delta \boldsymbol{A}\in \mathbb{R} ^{m\times m}\) satisfying \(\frac{\left\| \delta \boldsymbol{A} \right\|}{\left\| \boldsymbol{A} \right\|}=O\left( \rho \cdot \varepsilon _{\mathrm{machine}} \right)\) where

$$\rho =\frac{\max_{i,j} \left| u_{ij} \right|}{\max_{i,j} \left| a_{ij} \right|}$$

is called the growth factor (\(u_{ij}\) being the entries of the computed \(\tilde{\boldsymbol{U}}\)).
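The growth factor can be as large as \(2^{m-1}\) even with partial pivoting. The sketch below (plain Python, illustrative) computes \(\rho\) for the classic worst-case matrix: 1 on the diagonal, \(-1\) strictly below it, and 1 in the last column. Partial pivoting performs no row swaps here (every pivot already has maximal magnitude in its column), so plain elimination gives the pivoted factorization, and each step doubles the last column.

```python
# Worst-case growth under partial pivoting: for this matrix no row swaps
# occur, and the last column doubles at every step, so rho = 2^(m-1).

def lu_growth_factor(A):
    m = len(A)
    U = [row[:] for row in A]            # eliminate on a copy
    for k in range(m - 1):
        for i in range(k + 1, m):
            l = U[i][k] / U[k][k]        # multiplier l_ik
            for j in range(k, m):
                U[i][j] -= l * U[k][j]   # update row i
    max_u = max(abs(v) for row in U for v in row)
    max_a = max(abs(v) for row in A for v in row)
    return max_u / max_a

m = 8
A = [[1.0 if i == j or j == m - 1 else (-1.0 if i > j else 0.0)
      for j in range(m)] for i in range(m)]
print(lu_growth_factor(A))   # 2**(m-1) = 128.0
```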
Theorem
: If \(\boldsymbol{A}\) is symmetric positive definite (SPD), then Cholesky Factorization is backward stable:

$$\tilde{\boldsymbol{R}}^{\mathrm{T}}\tilde{\boldsymbol{R}}=\boldsymbol{A}+\delta \boldsymbol{A}$$
for some \(\delta \boldsymbol{A}\) satisfying \(\frac{\left\| \delta \boldsymbol{A} \right\|}{\left\| \boldsymbol{A} \right\|}=O\left( \varepsilon _{\mathrm{machine}} \right)\).
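A small numerical check of this statement (plain Python, illustrative; the SPD test matrix is an arbitrary choice with an exact integer factor):

```python
import math

def cholesky(A):
    """Upper-triangular R with A = R^T R, for SPD A (plain-Python sketch)."""
    m = len(A)
    R = [[0.0] * m for _ in range(m)]
    for i in range(m):
        s = A[i][i] - sum(R[k][i] ** 2 for k in range(i))
        R[i][i] = math.sqrt(s)                       # diagonal entry
        for j in range(i + 1, m):
            t = A[i][j] - sum(R[k][i] * R[k][j] for k in range(i))
            R[i][j] = t / R[i][i]                    # off-diagonal entries
    return R

A = [[4.0, 2.0, 2.0],
     [2.0, 5.0, 3.0],
     [2.0, 3.0, 6.0]]
R = cholesky(A)
# reconstruct R^T R and measure the backward perturbation dA = R^T R - A
dA = max(abs(sum(R[k][i] * R[k][j] for k in range(3)) - A[i][j])
         for i in range(3) for j in range(3))
print(dA)   # machine-epsilon sized
```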