v2010.10.26 - Convex Optimization

convexoptimization.com
628 APPENDIX A. LINEAR ALGEBRA

The (right-)eigenvectors of a diagonalizable matrix $A\in\mathbb{R}^{m\times m}$ are linearly independent if and only if the left-eigenvectors are. So matrix $A$ has a representation in terms of its right- and left-eigenvectors; from the diagonalization (1547), assuming 0 eigenvalues are ordered last,
$$A \;=\; \sum_{i=1}^{m}\lambda_i\, s_i w_i^T \;=\; \sum_{\substack{i=1\\ \lambda_i\neq 0}}^{k\,\le\, m}\lambda_i\, s_i w_i^T \tag{1586}$$
From the linearly independent dyads theorem (B.1.1.0.2), the dyads $\{s_i w_i^T\}$ must be independent because each set of eigenvectors is; hence $\operatorname{rank}A = k$, the number of nonzero eigenvalues. Complex eigenvectors and eigenvalues are common for real matrices, and must come in complex conjugate pairs for the summation to remain real. Assume that conjugate pairs of eigenvalues appear in sequence. Given any particular conjugate pair from (1586), we get the partial summation
$$\lambda_i s_i w_i^T + \lambda_i^* s_i^* w_i^{*T} \;=\; 2\,\mathrm{re}\bigl(\lambda_i s_i w_i^T\bigr) \;=\; 2\bigl(\mathrm{re}\,s_i\;\mathrm{re}(\lambda_i w_i^T) \,-\, \mathrm{im}\,s_i\;\mathrm{im}(\lambda_i w_i^T)\bigr) \tag{1587}$$
where$^{\text{A.18}}$ $\lambda_i^* \triangleq \lambda_{i+1}$, $s_i^* \triangleq s_{i+1}$, and $w_i^* \triangleq w_{i+1}$. Then (1586) is equivalently written
$$A \;=\; 2\!\!\sum_{\substack{i\,:\,\lambda\in\mathbb{C}\\ \lambda_i\neq 0}}\!\! \mathrm{re}\,s_{2i}\;\mathrm{re}(\lambda_{2i} w_{2i}^T) \,-\, \mathrm{im}\,s_{2i}\;\mathrm{im}(\lambda_{2i} w_{2i}^T) \;+\; \sum_{\substack{j\,:\,\lambda\in\mathbb{R}\\ \lambda_j\neq 0}}\lambda_j\, s_j w_j^T \tag{1588}$$
The summation (1588) shows: $A$ is a linear combination of real and imaginary parts of its (right-)eigenvectors corresponding to nonzero eigenvalues. The $k$ vectors $\{\mathrm{re}\,s_i\in\mathbb{R}^m,\ \mathrm{im}\,s_i\in\mathbb{R}^m \mid \lambda_i\neq 0,\ i\in\{1\dots m\}\}$ must therefore span the range of diagonalizable matrix $A$. The argument is similar regarding span of the left-eigenvectors.

A.7.4 trace and matrix product

For $X, A\in\mathbb{R}^{M\times N}_+$ (39)
$$\operatorname{tr}(X^T A) = 0 \;\Leftrightarrow\; X\circ A = A\circ X = 0 \tag{1589}$$

$^{\text{A.18}}$ Complex conjugate of $w$ is denoted $w^*$. Conjugate transpose is denoted $w^H = w^{*T}$.
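The dyad expansion (1586) and the rank claim can be checked numerically. This is a sketch (not from the book): we build a diagonalizable real matrix from chosen right-eigenvectors and eigenvalues, take the left-eigenvectors as columns of the inverse transpose, and verify that the dyad sum reproduces the matrix and that its rank equals the number of nonzero eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

# Diagonalizable 4x4 real matrix with one zero eigenvalue, ordered last.
S = rng.standard_normal((4, 4))         # columns s_i: right-eigenvectors
lam = np.array([2.0, -1.0, 0.5, 0.0])   # eigenvalues lambda_i
A = S @ np.diag(lam) @ np.linalg.inv(S)

# Left-eigenvectors w_i are rows of S^{-1}; store them as columns of W.
W = np.linalg.inv(S).T

# (1586): A equals the sum of dyads lambda_i s_i w_i^T.
dyads = sum(lam[i] * np.outer(S[:, i], W[:, i]) for i in range(4))
assert np.allclose(A, dyads)

# rank A = k, the number of nonzero eigenvalues (here k = 3).
assert np.linalg.matrix_rank(A) == np.count_nonzero(lam)
```

The same construction with complex-conjugate eigenvalue pairs would keep `A` real, as (1587)–(1588) require.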

A.7. ZEROS 629

For $X, A\in\mathbb{S}^M_+$ [34, §2.6.1 exer. 2.8] [359, §3.1]
$$\operatorname{tr}(XA) = 0 \;\Leftrightarrow\; XA = AX = 0 \tag{1590}$$

Proof. (⇐) Suppose $XA = AX = 0$. Then $\operatorname{tr}(XA)=0$ is obvious.
(⇒) Suppose $\operatorname{tr}(XA)=0$. $\operatorname{tr}(XA)=\operatorname{tr}(\sqrt{A}\,X\sqrt{A})$ whose argument is positive semidefinite by Corollary A.3.1.0.5. Trace of any square matrix is equivalent to the sum of its eigenvalues. Eigenvalues of a positive semidefinite matrix can total 0 if and only if each and every nonnegative eigenvalue is 0. The only positive semidefinite matrix, having all 0 eigenvalues, resides at the origin; (confer (1614)) id est,
$$\sqrt{A}\,X\sqrt{A} \;=\; \bigl(\sqrt{X}\sqrt{A}\bigr)^T \sqrt{X}\sqrt{A} \;=\; 0 \tag{1591}$$
implying $\sqrt{X}\sqrt{A} = 0$, which in turn implies $\sqrt{X}\bigl(\sqrt{X}\sqrt{A}\bigr)\sqrt{A} = XA = 0$. Arguing similarly yields $AX = 0$. $\blacksquare$

Diagonalizable matrices $A$ and $X$ are simultaneously diagonalizable if and only if they are commutative under multiplication; [202, §1.3.12] id est, iff they share a complete set of eigenvectors.

A.7.4.0.1 Example. An equivalence in nonisomorphic spaces.
Identity (1590) leads to an unusual equivalence relating convex geometry to traditional linear algebra: the convex sets, given $A \succeq 0$,
$$\{X \mid \langle X, A\rangle = 0\} \cap \{X \succeq 0\} \;\equiv\; \{X \mid \mathcal{N}(X) \supseteq \mathcal{R}(A)\} \cap \{X \succeq 0\} \tag{1592}$$
(one expressed in terms of a hyperplane, the other in terms of nullspace and range) are equivalent only when symmetric matrix $A$ is positive semidefinite. We might apply this equivalence to the geometric center subspace, for example,
$$\mathbb{S}^M_c = \{Y\in\mathbb{S}^M \mid Y\mathbf{1} = 0\} = \{Y\in\mathbb{S}^M \mid \mathcal{N}(Y) \supseteq \mathbf{1}\} = \{Y\in\mathbb{S}^M \mid \mathcal{R}(Y) \subseteq \mathcal{N}(\mathbf{1}^T)\} \tag{1998}$$
from which we derive (confer (994))
$$\mathbb{S}^M_c \cap \mathbb{S}^M_+ \;\equiv\; \{X \succeq 0 \mid \langle X, \mathbf{1}\mathbf{1}^T\rangle = 0\} \tag{1593}$$
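Identity (1590) can also be illustrated numerically. The following sketch (an illustration, not from the book) builds two rank-one positive semidefinite matrices with orthogonal ranges, for which the trace of the product vanishes and the product itself is zero, and contrasts them with a generic positive semidefinite pair where the trace, and hence the product, is nonzero.

```python
import numpy as np

rng = np.random.default_rng(1)

u = np.array([1.0, -1.0, 0.0])
v = np.array([1.0, 1.0, 0.0])   # u is orthogonal to v
X = np.outer(u, u)              # PSD, rank 1, range spanned by u
A = np.outer(v, v)              # PSD, rank 1, range spanned by v

# tr(XA) = (u.v)^2 = 0, and (1590) forces the products themselves to vanish.
assert np.isclose(np.trace(X @ A), 0.0)
assert np.allclose(X @ A, 0) and np.allclose(A @ X, 0)

# For a generic PSD matrix B with range meeting that of X, tr(XB) > 0
# and the product is nonzero, consistent with the equivalence.
M = rng.standard_normal((3, 3))
B = M @ M.T                     # PSD (full rank almost surely)
assert np.trace(X @ B) > 0 and not np.allclose(X @ B, 0)
```

Note the contrast with (1589): for elementwise-nonnegative (not necessarily symmetric) matrices, a zero trace kills only the Hadamard product $X\circ A$, not the matrix product.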
