
3.4 GMRES

Stopping criteria of GMRES

In order to analyze the convergence of GMRES, it is important to know which mathematical properties of A determine the size of ‖r‖. There are two main observations. The first is that GMRES converges monotonically: ‖r_{m+1}‖ ≤ ‖r_m‖. The reason is that ‖r_m‖ is as small as possible for the subspace K_m; by enlarging K_m to the space K_{m+1}, it is only possible to decrease the residual norm, or at worst leave it unchanged. The second is that, in the absence of rounding errors, the process must converge after at most n steps: ‖r_n‖ = 0.
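Both observations are easy to check numerically. Below is a minimal sketch of full (non-restarted) GMRES in Python with NumPy; the function name gmres_full, the choice x_0 = 0 (so r_0 = b), and the breakdown threshold are illustrative assumptions, not notation from this chapter.

```python
import numpy as np

def gmres_full(A, b, tol=1e-12):
    """Minimal full GMRES sketch (x0 = 0, so r0 = b).
    Returns the approximate solution and the history of the norms ‖r_m‖."""
    n = b.size
    beta = np.linalg.norm(b)
    Q = np.zeros((n, n + 1))      # orthonormal basis of the Krylov space K_m
    H = np.zeros((n + 1, n))      # Hessenberg matrix: A Q_m = Q_{m+1} H_m
    Q[:, 0] = b / beta
    history = [beta]
    for m in range(n):
        v = A @ Q[:, m]           # enlarge K_m to K_{m+1}
        for i in range(m + 1):    # modified Gram-Schmidt orthogonalization
            H[i, m] = Q[:, i] @ v
            v -= H[i, m] * Q[:, i]
        H[m + 1, m] = np.linalg.norm(v)
        if H[m + 1, m] > 1e-14 * beta:
            Q[:, m + 1] = v / H[m + 1, m]
        # ‖r_m‖ is as small as possible over K_m: a small least-squares problem
        e1 = np.zeros(m + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:m + 2, :m + 1], e1, rcond=None)[0]
        r_norm = np.linalg.norm(e1 - H[:m + 2, :m + 1] @ y)
        history.append(r_norm)    # non-increasing: ‖r_{m+1}‖ ≤ ‖r_m‖
        if r_norm <= tol * beta:
            break
    return Q[:, :y.size] @ y, history
```

On any nonsingular test matrix, history decreases monotonically and reaches roundoff level at or before step n, illustrating both observations.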

Many algorithms of numerical linear algebra satisfy a condition that is both stronger and simpler than stability. We say that an algorithm f̃ for a problem f is backward stable if f̃(x) = f(x̃) for some approximate solution x̃ = x_m. Here x̃ = x + δx and f̃ = f + δf are a perturbed solution and a perturbed function, respectively. The normwise backward error η is then defined as:

η = ‖x_m − x‖ / ‖x‖ = ‖A⁻¹(b − r_m) − A⁻¹b‖ / ‖A⁻¹b‖ = ‖r_m‖₂ / ‖b‖₂    (3.56)

The backward error measures the distance between the data of the initial problem and those of the perturbed problem. The best η one can require from an algorithm is a backward error of the order of the machine precision. In practice, the approximation of the solution is acceptable when its backward error is smaller than the uncertainty in the data.
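As an illustration, η from (3.56) is cheap to monitor and makes a natural stopping criterion. The snippet below reuses the gmres_full sketch above; the test matrix and tolerance are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = 5 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)  # arbitrary test system
b = rng.standard_normal(n)

x_m, history = gmres_full(A, b, tol=1e-10)
eta = np.linalg.norm(b - A @ x_m) / np.linalg.norm(b)  # η = ‖r_m‖₂ / ‖b‖₂, eq. (3.56)
print(f"backward error η = {eta:.2e}")  # accept once η is below the data uncertainty
```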

3.4.1 Preconditioning<br />

The convergence of a matrix iteration depends on the properties of the matrix, i.e., the eigenvalues, the singular values, or sometimes other information. In many cases, the problem of interest can be transformed so that these properties are improved drastically. This process of 'preconditioning' is essential to most successful applications of iterative methods.

Suppose we wish to solve an n × n nonsingular system Ax = b. For any nonsingular n × n matrix M, the system MAx = Mb has the same solution. If we solve this system iteratively, the convergence will depend on the properties of MA instead of those of A. If the preconditioner M is well chosen, the problem may be solved much more rapidly. The best possible preconditioner is the inverse of the system matrix, M = A⁻¹; of course, using A⁻¹ as a preconditioner would be pointless, since computing it is equivalent to solving the original problem. The idea is then to make M⁻¹ as close to A as possible, in other words, to make the eigenvalues of M⁻¹A close to 1 and ‖M⁻¹A − I‖₂ small, where I is the identity matrix. In doing so, fast convergence is expected.

Two well-known preconditioning methods are diagonal scaling, or Jacobi, preconditioning (M⁻¹ = diag(A))
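A minimal sketch of this diagonal scaling, under the left-preconditioned convention MAx = Mb used above (so M⁻¹ = diag(A) amounts to scaling row i by 1/a_ii), reusing A, b, and gmres_full from the previous snippets; it assumes all diagonal entries of A are nonzero.

```python
import numpy as np

# Jacobi / diagonal scaling: M⁻¹ = diag(A), i.e. M = diag(A)⁻¹ applied on the left.
d = np.diag(A)           # assumes a_ii ≠ 0 for all i
MA = A / d[:, None]      # row-scaling: (M A)_ij = a_ij / a_ii
Mb = b / d

x_p, history_p = gmres_full(MA, Mb, tol=1e-10)
# When the diagonal carries most of A, the eigenvalues of M⁻¹A cluster near 1,
# so len(history_p) is typically smaller than len(history) above.
```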
