Greville's Method for Preconditioning Least Squares ... - Projects
where C is a nonsingular matrix. Hence, we have
$$\mathcal{R}(M^T) = \mathcal{R}(A). \qquad (5.4)$$
For the general case, K and F are still nonsingular. However, the expression for V is not straightforward. To simplify the problem, we make the following assumption.
Assumption 1
1. There is no zero column in A.
2. Our preconditioning algorithm detects all the linearly independent columns of A.
The definition of $v_i$ can be rewritten as follows:
$$v_i = \begin{cases} a_i - A_{i-1}k_i \in \mathcal{R}(A_i)\setminus\mathcal{R}(A_{i-1}), & \text{if } a_i \notin \mathcal{R}(A_{i-1}), \\ (A_{i-1}^{\dagger})^T k_i \in \mathcal{R}(A_{i-1}) = \mathcal{R}(A_i), & \text{if } a_i \in \mathcal{R}(A_{i-1}). \end{cases} \qquad (5.5)$$
Hence, we have
$$\mathrm{span}\{v_1, v_2, \ldots, v_i\} = \mathrm{span}\{a_1, a_2, \ldots, a_i\}. \qquad (5.6)$$
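As an illustration of (5.5) and (5.6), the sketch below builds each $v_i$ from the two branches and checks the span equality by comparing matrix ranks. The test matrix A is a made-up example with one deliberately dependent column, and the tolerance `1e-12` used to decide whether $a_i \in \mathcal{R}(A_{i-1})$ is an assumption for illustration, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
# Append a fourth column that lies in the span of the first three,
# so that both branches of (5.5) are exercised.
A = np.hstack([A, (A @ np.array([1.0, -2.0, 0.5]))[:, None]])
V = np.zeros_like(A)

for i in range(A.shape[1]):
    A_prev = A[:, :i]                                  # A_{i-1} (empty when i = 0)
    a_i = A[:, i]
    pinv_prev = np.linalg.pinv(A_prev) if i > 0 else np.zeros((0, A.shape[0]))
    k_i = pinv_prev @ a_i                              # k_i = A_{i-1}^† a_i
    r = a_i - A_prev @ k_i                             # residual a_i - A_{i-1} k_i
    if np.linalg.norm(r) > 1e-12:                      # a_i ∉ R(A_{i-1}): first branch
        V[:, i] = r
    else:                                              # a_i ∈ R(A_{i-1}): second branch
        V[:, i] = pinv_prev.T @ k_i

    # Span check (5.6): appending V_i to A_i does not raise the rank.
    Ai, Vi = A[:, :i + 1], V[:, :i + 1]
    assert np.linalg.matrix_rank(np.hstack([Ai, Vi])) == np.linalg.matrix_rank(Ai)
```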
On the other hand, note line 12 of Algorithm 1:
$$M_i = M_{i-1} + \frac{1}{f_i}(e_i - k_i)v_i^T, \qquad (5.7)$$
which implies that every row of $M_i$ is a linear combination of the vectors $v_k^T$, $1 \le k \le i$, i.e.,
$$\mathcal{R}(M_i^T) = \mathrm{span}\{v_1, \ldots, v_i\}. \qquad (5.8)$$
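The update (5.7) can be sketched as a classical Greville recursion. Since the full listing of Algorithm 1 is not reproduced in this excerpt, the choices of $f_i$ ($f_i = v_i^T v_i$ in the independent branch, $f_i = 1 + k_i^T k_i$ in the dependent branch) and the zero-padding of $k_i$ into $\mathbb{R}^n$ are assumptions taken from the standard Greville formulas:

```python
import numpy as np

def greville_pinv(A):
    """Build the Moore-Penrose inverse column by column via rank-one updates (5.7)."""
    m, n = A.shape
    M = np.zeros((n, m))                      # M_0 = 0
    for i in range(n):
        a_i = A[:, i]
        k = M @ a_i                           # k_i padded with zeros (rows > i of M are zero)
        r = a_i - A @ k                       # equals a_i - A_{i-1} k_i since trailing k entries are 0
        e_i = np.zeros(n)
        e_i[i] = 1.0
        if np.linalg.norm(r) > 1e-12:         # a_i independent of previous columns
            v, f = r, r @ r                   # assumed: f_i = v_i^T v_i
        else:                                 # a_i dependent on previous columns
            v, f = M.T @ k, 1.0 + k @ k       # assumed: v_i = (A_{i-1}^†)^T k_i, f_i = 1 + k_i^T k_i
        M = M + np.outer(e_i - k, v) / f      # rank-one update (5.7)
    return M

A = np.vander(np.linspace(1, 2, 5), 3, increasing=True)  # 5x3 full column rank example
assert np.allclose(greville_pinv(A), np.linalg.pinv(A))
```

Note that row $i$ of the update receives $v_i^T/f_i$ while rows $k < i$ receive multiples of the same $v_i^T$, which is exactly why each row of $M_i$ lies in $\mathrm{span}\{v_1^T, \ldots, v_i^T\}$ as stated in (5.8).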
Based on the above discussions, we obtain the following theorem.
Theorem 3 Let $A \in \mathbb{R}^{m \times n}$, $m \ge n$. If Assumption 1 holds, then we have the following relationships, where M is the approximate Moore-Penrose inverse constructed by Algorithm 1:
$$\mathcal{R}(M^T) = \mathcal{R}(V) = \mathcal{R}(A). \qquad (5.9)$$
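The range equalities in (5.9) can be checked numerically. The snippet below uses the exact Moore-Penrose inverse as a stand-in for the M of Algorithm 1 (under Assumption 1 the two share the same row space), and tests range equality by checking that concatenating the two candidate bases does not raise the rank, even for a rank-deficient A:

```python
import numpy as np

rng = np.random.default_rng(1)
# An 8x4 example of rank 3, so the rank-deficient case is covered.
A = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 4))
M = np.linalg.pinv(A)                 # stand-in for the M built by Algorithm 1

rank_A = np.linalg.matrix_rank(A)
# R(M^T) = R(A): stacking the bases side by side leaves the rank unchanged,
# and M^T alone already attains that rank.
assert np.linalg.matrix_rank(np.hstack([A, M.T])) == rank_A
assert np.linalg.matrix_rank(M.T) == rank_A
```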
Remark 8 In Assumption 1, we assume that our algorithm detects all the linearly independent columns of A. Mistakes in the opposite direction are still allowed: a linearly dependent column may be recognized as linearly independent. An extreme case is to recognize all the columns of A as linearly independent, i.e., to treat A as a full column rank matrix. In this sense, the assumption can always be satisfied.
Hence, we have proved that for any matrix $A \in \mathbb{R}^{m \times n}$, $\mathcal{R}(A) = \mathcal{R}(M^T)$. We have the following theorem.
Theorem 4 If Assumption 1 holds, then for all $b \in \mathbb{R}^m$, the preconditioned least squares problem (5.1), where M is constructed by Algorithm 1, is equivalent to the original least squares problem (1.1).
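Since the statement of (5.1) lies outside this excerpt, the following check assumes it is the right-preconditioned problem $\min_y \|AMy - b\|_2$ with $x = My$, and again uses the exact pseudoinverse as a stand-in for the M of Algorithm 1:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 4))      # example full column rank matrix
b = rng.standard_normal(10)
M = np.linalg.pinv(A)                 # stand-in for the output of Algorithm 1

x_orig, *_ = np.linalg.lstsq(A, b, rcond=None)   # original problem (1.1)
y, *_ = np.linalg.lstsq(A @ M, b, rcond=None)    # assumed preconditioned form of (5.1)
x_prec = M @ y

# Both problems attain the same minimal residual norm.
assert np.isclose(np.linalg.norm(A @ x_orig - b), np.linalg.norm(A @ x_prec - b))
```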