
f = \begin{bmatrix} (\Delta x)^2 \\ (\Delta x)^2 \\ \vdots \\ (\Delta x)^2 \\ \tfrac{1}{2}(\Delta x)^2 \end{bmatrix}    (16c)

Note that in this case N is the number of unknowns, so it does not include the first nodal point (for which the value is prescribed). N is therefore the number of nodal points minus one.
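As a concrete illustration (not part of the original notes), the right-hand side vector can be assembled directly from (16c). In the Python/NumPy sketch below, the number of nodal points and the grid spacing are illustrative values only.

```python
import numpy as np

# Sketch of the right-hand side (16c): every entry is (dx)^2 except the last,
# which carries the factor 1/2. num_nodes and dx are illustrative assumptions.
num_nodes = 6                  # nodal points, including the one with prescribed value
N = num_nodes - 1              # number of unknowns (nodal points minus one)
dx = 1.0 / N                   # assumed uniform grid spacing

f = np.full(N, dx**2)          # entries (dx)^2
f[-1] = 0.5 * dx**2            # last entry: (1/2)(dx)^2
```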

4.1. Solution of the discrete system: Gaussian elimination

The matrix-vector system can be solved using various methods. A brute-force method is to compute the inverse A^{-1} of the matrix and find the solution by matrix-vector multiplication

u = A^{-1} f    (17)

It is generally quite expensive to compute the exact inverse of a matrix. One exception is the inversion of a diagonal matrix, which is trivial. A more efficient method to solve (15) is to use Gaussian elimination, which generally requires O(N^3) operations for a full matrix but is significantly more efficient for the sparse matrices that occur in finite difference methods. A general approach is to decompose the matrix A into an upper and a lower triangular system, A = LU, which can then be solved efficiently by substitution. The efficiency and memory requirements of Gaussian elimination are much improved for special forms of the matrix A. For example, when A is symmetric it can be stored in approximately half the amount of memory, and a more efficient LDL^T decomposition can be used, where D is a diagonal matrix. If the matrix is symmetric and positive definite, one can compute the even more efficient Cholesky decomposition LL^T (see, e.g., Golub and Van Loan, 1989, or Chapter 2 of Press et al., 1992).
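As a rough sketch (not from the original notes), the three approaches can be compared on a small instance of the system, assuming the tridiagonal matrix of (16a) with diagonal [2, ..., 2, 1] and off-diagonals -1 and the right-hand side (16c); the SciPy routines lu_factor/lu_solve and cho_factor/cho_solve stand in here for Gaussian elimination (LU) and the Cholesky decomposition, and N and ∆x are illustrative values.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

# Small instance of the (assumed) system (16a)/(16c); N and dx are illustrative.
N, dx = 5, 0.2
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
A[-1, -1] = 1.0                          # modified last diagonal entry
f = np.full(N, dx**2)
f[-1] = 0.5 * dx**2

# Brute force: explicit inverse followed by a matrix-vector product (expensive).
u_inv = np.linalg.inv(A) @ f

# Gaussian elimination: LU decomposition, then forward/back substitution.
lu, piv = lu_factor(A)
u_lu = lu_solve((lu, piv), f)

# A is symmetric positive definite, so a Cholesky factorization A = L L^T also applies.
c, low = cho_factor(A)
u_chol = cho_solve((c, low), f)

print(np.allclose(u_inv, u_lu), np.allclose(u_lu, u_chol))   # True True
```

Once a factorization has been computed, the same triangular solves can be reused for any number of right-hand sides, which is another reason the explicit inverse is avoided in practice.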

4.2. Matrix storage

An advantage of matrices arising from the discretization of differential equations by finite differences or finite elements is that they are generally sparse, with non-zero coefficients only within a certain band surrounding the diagonal. For the matrix (16a) we have bandwidth one, and because of symmetry we can store the matrix by just the (main) diagonal ([2, 2, ..., 2, 1]) and the diagonal line of coefficients right above it ([-1, -1, ..., -1]). The storage requirements are then 2N, which compares quite favorably with N^2 for the full matrix. The algorithm for Gaussian elimination for a full matrix has to be modified to make use of the different storage, but the coefficients outside the band are not affected in any way. Coefficients inside the band may be zero after discretization, but they need to be stored explicitly since they may become non-zero ('the band gets filled') during the matrix decomposition.
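As an illustration (not part of the original notes), SciPy's solveh_banded solves a symmetric banded system stored in exactly this compact form, i.e. only the main diagonal and the superdiagonal, roughly 2N numbers in total. The matrix and right-hand side below assume the forms (16a) and (16c), with illustrative N and ∆x.

```python
import numpy as np
from scipy.linalg import solveh_banded

N, dx = 6, 1.0 / 6                 # illustrative values

# Symmetric banded storage: row 0 holds the superdiagonal [-1, ..., -1]
# (first entry is unused padding), row 1 holds the main diagonal [2, ..., 2, 1].
ab = np.zeros((2, N))
ab[0, 1:] = -1.0
ab[1, :] = 2.0
ab[1, -1] = 1.0

f = np.full(N, dx**2)              # right-hand side (16c)
f[-1] = 0.5 * dx**2

u = solveh_banded(ab, f)           # banded (Cholesky-based) solve: O(N) work and storage
print(u)
```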

This fill-in is not relevant for the matrix (12), but it becomes important for matrices that result from 2D or 3D discretizations. For example, the matrix that results from the discretization of the 2D Poisson equation is depicted in Figure 2b. This follows from a 2D extension of the stencil (9) on an equidistant grid with 5 grid points along the horizontal axis (verify). The banded matrices resulting from the discretization in Figure 2 have bandwidth B = 1 (1D) or B = 5 (2D example), where the bandwidth is defined as the value B for which all coefficients a_{ij} are zero if j > i + B. For non-symmetric matrices we can distinguish between the upper and the lower bandwidth. In general it is more efficient to minimize the bandwidth. If, for example, one has a 2D grid of 10x5 nodal points, it is best to number the nodal points in the vertical direction first. That gives a bandwidth of 5, compared to a bandwidth of 10 when the points are numbered in the horizontal direction first.
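The effect of the numbering order can be checked numerically. The sketch below (not from the notes) builds a 5-point Laplacian on a 10x5 grid, once with the short (5-point) direction numbered first and once with the long (10-point) direction first, and measures the resulting bandwidth. The function names and the pure-Dirichlet treatment of the boundaries are illustrative assumptions.

```python
import numpy as np

def laplacian_2d(n_fast, n_slow):
    """5-point Laplacian on an n_fast x n_slow grid; the 'fast' index is numbered first."""
    N = n_fast * n_slow
    A = np.zeros((N, N))
    for j in range(n_slow):
        for i in range(n_fast):
            k = j * n_fast + i                           # node number
            A[k, k] = 4.0
            if i > 0:          A[k, k - 1] = -1.0        # neighbours in the fast direction
            if i < n_fast - 1: A[k, k + 1] = -1.0
            if j > 0:          A[k, k - n_fast] = -1.0   # neighbours in the slow direction
            if j < n_slow - 1: A[k, k + n_fast] = -1.0
    return A

def bandwidth(A):
    """Smallest B such that a_ij = 0 whenever j > i + B."""
    rows, cols = np.nonzero(A)
    return int(np.max(cols - rows))

print(bandwidth(laplacian_2d(5, 10)))   # vertical (5-point) direction first -> 5
print(bandwidth(laplacian_2d(10, 5)))   # horizontal (10-point) direction first -> 10
```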

