v2009.01.01 - Convex Optimization

A.3. PROPER STATEMENTS

For A diagonalizable (§A.5), A = SΛS⁻¹, (confer [287, p.255])

rankA = rank δ(λ(A)) = rank Λ        (1363)

meaning, rank is equal to the number of nonzero eigenvalues in vector

λ(A) ≜ δ(Λ)        (1364)

by the 0 eigenvalues theorem (§A.7.3.0.1).

(Ky Fan) For A, B ∈ S^n [48, §1.2] (confer (1621))

tr(AB) ≤ λ(A)^T λ(B)        (1365)

with equality (Theobald) when A and B are simultaneously diagonalizable [176] with the same ordering of eigenvalues.
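Both facts admit a quick numerical check. The sketch below (the particular matrices, seed, and tolerances are illustrative assumptions, not from the text) verifies (1363) for a diagonalizable A with two zero eigenvalues, and the Ky Fan inequality (1365) for a random symmetric pair, with both eigenvalue vectors sorted in the same (ascending) order:

```python
import numpy as np

rng = np.random.default_rng(0)

# (1363): build A = S Λ S⁻¹ with a known spectrum containing two zeros;
# rank A must equal the number of nonzero eigenvalues, here 3.
S = rng.standard_normal((5, 5))
Lam = np.diag([3.0, -1.0, 2.0, 0.0, 0.0])
A = S @ Lam @ np.linalg.inv(S)
r = np.linalg.matrix_rank(A)
assert r == 3

# (1365), Ky Fan: for symmetric A, B, tr(AB) ≤ λ(A)ᵀλ(B) when both
# eigenvalue vectors are taken in the same order. eigvalsh returns
# eigenvalues in ascending order, so no extra sorting is required.
A = rng.standard_normal((5, 5)); A = A + A.T
B = rng.standard_normal((5, 5)); B = B + B.T
lhs = np.trace(A @ B)
rhs = np.linalg.eigvalsh(A) @ np.linalg.eigvalsh(B)
assert lhs <= rhs + 1e-9
```

Equality in (1365) occurs exactly when A and B can be diagonalized by one common orthogonal matrix with matching eigenvalue order, as the Theobald condition states.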

For A ∈ R^{m×n} and B ∈ R^{n×m}

tr(AB) = tr(BA)        (1366)

and η eigenvalues of the product and commuted product are identical, including their multiplicity; [176, §1.3.20] id est,

λ(AB)_{1:η} = λ(BA)_{1:η} ,   η ≜ min{m, n}        (1367)

Any eigenvalues remaining are zero. By the 0 eigenvalues theorem (§A.7.3.0.1),

rank(AB) = rank(BA) ,   AB and BA diagonalizable        (1368)
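A numerical illustration of (1366) and (1367), under assumed random data: with m = 3 and n = 5, the 3×3 product AB and the 5×5 product BA share η = 3 eigenvalues, and the remaining n − m = 2 eigenvalues of BA vanish:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5                       # η = min{m, n} = 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

# (1366): trace is invariant under commutation of the product.
traces_match = np.isclose(np.trace(A @ B), np.trace(B @ A))
assert traces_match

# (1367): AB (m×m) and BA (n×n) share η eigenvalues, counting
# multiplicity; the n − m leftover eigenvalues of BA are zero.
eig_AB = np.linalg.eigvals(A @ B)
eig_BA = np.linalg.eigvals(B @ A)
keep = np.argsort(np.abs(eig_BA))[-m:]   # drop the (near-)zero leftovers
spectra_match = np.allclose(np.sort_complex(eig_AB),
                            np.sort_complex(eig_BA[keep]))
assert spectra_match
```

Sorting both spectra with `np.sort_complex` (by real part, then imaginary part) gives a deterministic order for the comparison even when eigenvalues come in complex-conjugate pairs.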

For any compatible matrices A, B [176, §0.4]

min{rankA , rankB} ≥ rank(AB)        (1369)

For A, B ∈ S^n_+

rankA + rankB ≥ rank(A + B) ≥ min{rankA , rankB} ≥ rank(AB)        (1370)

For A, B ∈ S^n_+ linearly independent (§B.1.1),

rankA + rankB = rank(A + B) > min{rankA , rankB} ≥ rank(AB)        (1371)
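The rank chain (1369)–(1371) can be observed on random positive semidefinite matrices; the construction below (a sketch, with assumed sizes and seed) builds A, B ∈ S^6_+ of ranks 2 and 3. Random factors of small rank in R^6 generically have ranges intersecting only at the origin, so the pair is linearly independent and (1371) holds with equality in the first relation:

```python
import numpy as np

rng = np.random.default_rng(2)

def psd(rank, n=6):
    """Random n×n positive semidefinite matrix of the given rank."""
    F = rng.standard_normal((n, rank))
    return F @ F.T

A, B = psd(2), psd(3)
rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rSum = np.linalg.matrix_rank(A + B)
rProd = np.linalg.matrix_rank(A @ B)

# (1370): rankA + rankB ≥ rank(A+B) ≥ min{rankA, rankB} ≥ rank(AB)
assert rA + rB >= rSum >= min(rA, rB) >= rProd

# (1371): with linearly independent ranges (the generic case here),
# the first inequality tightens to equality: rankA + rankB = rank(A+B).
assert rA + rB == rSum
```

Note that (1369) needs no positive semidefiniteness — it holds for any conformable A, B — whereas the sharper chains (1370) and (1371) rely on A and B being in S^n_+.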
