Greville's Method for Preconditioning Least Squares ...
Proof If Assumption 1 holds, we have

    R(A) = R(M^T).                          (5.10)

Then there exists a nonsingular matrix C \in R^{n \times n} such that

    A = M^T C.                              (5.11)

Hence,

    R(M^T M A) = R(M^T M M^T C)             (5.12)
               = R(M^T M M^T)               (5.13)
               = R(M^T M)                   (5.14)
               = R(M^T)                     (5.15)
               = R(A).                      (5.16)

In the above equalities we used the relationship R(M M^T) = R(M). By Theorem 1 the proof is complete. ✷

6 Breakdown-free Condition

In this section we assume, without loss of generality, that the first r columns of A are linearly independent. Hence,

    R(A) = span{a_1, ..., a_r},             (6.1)

where rank(A) = r and a_i (i = 1, ..., r) is the i-th column of A. This assumption is harmless because column pivoting can easily be incorporated into Algorithm 1. Then we have

    a_i \in span{a_1, ..., a_r},   i = r + 1, ..., n.   (6.2)

In this case, after performing Algorithm 1 with numerical dropping, the matrix V can be written in the form

    V = [v_1, ..., v_r, v_{r+1}, ..., v_n].             (6.3)

If we denote [v_1, ..., v_r] by U_r, then

    U_r = A(I - K) I_r,   I_r = \begin{bmatrix} I_{r \times r} \\ 0 \end{bmatrix}.   (6.4)

There exists a matrix H \in R^{r \times (n-r)}, possibly rank deficient, such that

    [v_{r+1}, ..., v_n] = U_r H = A(I - K) I_r H.       (6.5)

Then V is given by

    V = [v_1, ..., v_r, v_{r+1}, ..., v_n]              (6.6)
      = [U_r, U_r H]                                    (6.7)
      = U_r [ I_{r \times r}  H ]                       (6.8)
      = A(I - K) \begin{bmatrix} I_{r \times r} \\ 0 \end{bmatrix} [ I_{r \times r}  H ]   (6.9)
      = A(I - K) \begin{bmatrix} I_{r \times r} & H \\ 0 & 0 \end{bmatrix}.                (6.10)
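The range identity proved above can be checked numerically. The following NumPy sketch is illustrative, not the paper's Algorithm 1: it uses the exact Moore-Penrose inverse `np.linalg.pinv(A)` as an idealized (no-dropping) preconditioner M, which satisfies Assumption 1 since R(A) = R(M^T), and a hypothetical helper `same_range` that compares column spaces by rank. It builds a rank-deficient A whose last n - r columns lie in the span of the first r, exactly the setting of (6.1)-(6.2), and verifies R(M^T M A) = R(A) together with the auxiliary fact R(M M^T) = R(M).

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient A in R^{m x n}: first r columns independent,
# columns r+1..n lie in their span, as assumed in (6.1)-(6.2).
m, n, r = 8, 5, 3
B = rng.standard_normal((m, r))
H = rng.standard_normal((r, n - r))
A = np.hstack([B, B @ H])          # rank(A) = r

# Idealized preconditioner: M = A^+ satisfies Assumption 1, R(A) = R(M^T).
M = np.linalg.pinv(A)

def same_range(X, Y, tol=1e-10):
    """R(X) == R(Y) iff stacking the columns does not increase the rank."""
    rx = np.linalg.matrix_rank(X, tol)
    ry = np.linalg.matrix_rank(Y, tol)
    rxy = np.linalg.matrix_rank(np.hstack([X, Y]), tol)
    return rx == ry == rxy

# The chain (5.12)-(5.16): R(M^T M A) = R(A).
print(same_range(M.T @ M @ A, A))  # True

# The relationship used in the proof: R(M M^T) = R(M).
print(same_range(M @ M.T, M))      # True
```

With numerical dropping, M is only an approximation of A^+, but the theorem asserts that the same range equalities persist as long as Assumption 1 holds.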
From Equation (6.10), we obtain

    M = (I - K) F^{-1} \begin{bmatrix} I_{r \times r} & 0 \\ H^T & 0 \end{bmatrix} (I - K)^T A^T.   (6.11)

From the above equation we can also see that the difference between the full-column-rank case and the rank-deficient case lies in

    \begin{bmatrix} I_{r \times r} & 0 \\ H^T & 0 \end{bmatrix},                                     (6.12)

which reduces to the identity matrix when A has full column rank. If there is no numerical dropping, M is the Moore-Penrose inverse of A,

    A^† = (I - \hat{K}) \hat{F}^{-1} \begin{bmatrix} I_{r \times r} & 0 \\ \hat{H}^T & 0 \end{bmatrix} (I - \hat{K})^T A^T.   (6.13)

Comparing Equation (6.11) with Equation (6.13), we have the following theorem.

Theorem 5 Let A \in R^{m \times n}. If Assumption 1 holds, then

    R(M) = R(A^†) = R(A^T),                 (6.14)

where M denotes the approximate Moore-Penrose inverse constructed by Algorithm 1.

Based on Theorem 3 and Theorem 5, we have the following theorem, which ensures that the GMRES method can determine a solution to the preconditioned problem MAx = Mb before breakdown happens for any b \in R^m.

Theorem 6 Let A \in R^{m \times n}. If Assumption 1 holds, then GMRES can determine a least squares solution to

    min_{x \in R^n} ||MAx - Mb||_2          (6.15)

before breakdown happens for all b \in R^m, where M is constructed by Algorithm 1.

Proof According to Theorem 2.1 in Brown and Walker [6], we only need to prove

    N(MA) = N(A^T M^T),                     (6.16)

which is equivalent to

    R(MA) = R(A^T M^T).                     (6.17)

Using the result of Theorem 3, there exists a nonsingular matrix T \in R^{n \times n} such that A = M^T T. Hence,

    R(MA) = R(M M^T T)                      (6.18)
          = R(M M^T)                        (6.19)
          = R(M).                           (6.20)

On the other hand,

    R(A^T M^T) = R(A^T A T^{-1})            (6.21)
               = R(A^T A)                   (6.22)
               = R(A^T).                    (6.23)

The proof is completed using Theorem 5. ✷
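The breakdown-free condition of Theorem 6 can also be checked numerically. The sketch below is again illustrative rather than the paper's algorithm: it reuses the exact Moore-Penrose inverse as the idealized preconditioner M and a hypothetical rank-based helper `same_range`. It verifies the Brown-Walker condition R(MA) = R((MA)^T) (equivalently N(MA) = N(A^T M^T)) on a rank-deficient problem, and that Mb lies in R(MA) for a random right-hand side b, so the preconditioned system is consistent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-deficient test problem, as in Section 6.
m, n, r = 8, 5, 3
B = rng.standard_normal((m, r))
A = np.hstack([B, B @ rng.standard_normal((r, n - r))])
M = np.linalg.pinv(A)              # idealized (no-dropping) preconditioner

MA = M @ A

def same_range(X, Y, tol=1e-10):
    """R(X) == R(Y) iff stacking the columns does not increase the rank."""
    rx = np.linalg.matrix_rank(X, tol)
    ry = np.linalg.matrix_rank(Y, tol)
    return rx == ry == np.linalg.matrix_rank(np.hstack([X, Y]), tol)

# Brown-Walker condition (6.16)-(6.17): MA is range-symmetric.
print(same_range(MA, MA.T))        # True

# For any b, Mb is in R(MA), so GMRES on MAx = Mb cannot break down
# before reaching a least squares solution.
b = rng.standard_normal(m)
print(same_range(np.hstack([MA, (M @ b)[:, None]]), MA))  # True
```

For M = A^+ the product MA is an orthogonal projector and the condition holds trivially; the content of Theorem 6 is that it continues to hold for the approximate M produced by Algorithm 1 with dropping, provided Assumption 1 is satisfied.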