v2009.01.01 - Convex Optimization
CHAPTER 2. CONVEX GEOMETRY

and where ◦ denotes the Hadamard product 2.10 of matrices [134, §1.1.4]. The adjoint operation Aᵀ on a matrix can therefore be defined in like manner:

    ⟨Y, AᵀZ⟩ ≜ ⟨AY, Z⟩    (33)

Take any element C₁ from a matrix-valued set in R^{p×k}, for example, and consider any particular dimensionally compatible real vectors v and w. Then the vector inner-product of C₁ with vwᵀ is

    ⟨vwᵀ, C₁⟩ = ⟨v, C₁w⟩ = vᵀC₁w = tr(wvᵀC₁) = 1ᵀ((vwᵀ)◦C₁)1    (34)

Further, linear bijective vectorization is distributive with respect to the Hadamard product; id est,

    vec(Y◦Z) = vec(Y)◦vec(Z)    (35)

2.2.0.0.1 Example. Application of the image theorem.
Suppose the set C ⊆ R^{p×k} is convex. Then for any particular vectors v ∈ Rᵖ and w ∈ Rᵏ, the set of vector inner-products

    Y ≜ vᵀCw = ⟨vwᵀ, C⟩ ⊆ R    (36)

is convex. This result is a consequence of the image theorem. Yet it is easy to show directly that a convex combination of elements from Y remains an element of Y. 2.11

More generally, vwᵀ in (36) may be replaced with any particular matrix Z ∈ R^{p×k} while convexity of the set ⟨Z, C⟩ ⊆ R persists. Further, by replacing v and w with any particular respective matrices U and W of dimension compatible with all elements of the convex set C, the set UᵀCW is convex by the image theorem because it is a linear mapping of C.

2.10 The Hadamard product is a simple entrywise product of corresponding entries from two matrices of like size; id est, not necessarily square. A commutative operation, the Hadamard product can be extracted from within a Kronecker product. [176, p.475]

2.11 To verify that, take any two elements C₁ and C₂ from the convex matrix-valued set C, and form the vector inner-products (36) that are two elements of Y by definition. Now make a convex combination of those inner products; videlicet, for 0 ≤ µ ≤ 1,

    µ⟨vwᵀ, C₁⟩ + (1−µ)⟨vwᵀ, C₂⟩ = ⟨vwᵀ, µC₁ + (1−µ)C₂⟩

The two sides are equivalent by linearity of the inner product. The right-hand side remains a vector inner-product of vwᵀ with an element µC₁ + (1−µ)C₂ from the convex set C; hence, it belongs to Y. Since that holds for any two elements from Y, Y must be a convex set.
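The identities (33)–(35) are easy to confirm numerically. The following sketch (not part of the original text; it assumes NumPy, and the matrix sizes are arbitrary illustrative choices) checks the adjoint relation, the four equivalent forms of the inner product (34), and the distributivity of vec over the Hadamard product:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 3, 4
A = rng.standard_normal((p, p))   # linear operator acting on Y ∈ R^{p×k}
Y = rng.standard_normal((p, k))
Z = rng.standard_normal((p, k))

def ip(S, T):
    """Vectorized-matrix inner product <S, T> = tr(S^T T)."""
    return np.trace(S.T @ T)

# (33): the adjoint of left-multiplication by A is left-multiplication by A^T
assert np.isclose(ip(Y, A.T @ Z), ip(A @ Y, Z))

# (34): four equivalent expressions for <vw^T, C1>
v = rng.standard_normal(p)
w = rng.standard_normal(k)
C1 = rng.standard_normal((p, k))
forms = [
    ip(np.outer(v, w), C1),                           # <vw^T, C1>
    v @ C1 @ w,                                       # v^T C1 w
    np.trace(np.outer(w, v) @ C1),                    # tr(wv^T C1)
    np.ones(p) @ (np.outer(v, w) * C1) @ np.ones(k),  # 1^T((vw^T)∘C1)1
]
assert np.allclose(forms, forms[0])

# (35): vec distributes over the Hadamard (entrywise) product
vec = lambda M: M.reshape(-1, order="F")  # column-major vectorization
assert np.allclose(vec(Y * Z), vec(Y) * vec(Z))
```

Because the Hadamard product acts entrywise, (35) holds for any fixed vectorization order; column-major (`order="F"`) matches the conventional definition of vec.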
2.2. VECTORIZED-MATRIX INNER PRODUCT

2.2.1 Frobenius'

2.2.1.0.1 Definition. Isomorphic.
An isomorphism of a vector space is a transformation equivalent to a linear bijective mapping. The image and inverse image under the transformation operator are then called isomorphic vector spaces. △

Isomorphic vector spaces are characterized by preservation of adjacency; id est, if v and w are points connected by a line segment in one vector space, then their images will also be connected by a line segment. Two Euclidean bodies may be considered isomorphic if there exists an isomorphism, of their vector spaces, under which the bodies correspond. [320, §I.1] Projection (§E) is not an isomorphism, for example; hence, perfect reconstruction (inverse projection) is generally impossible without additional information.

When Z = Y ∈ R^{p×k} in (31), the Frobenius norm results from the vector inner-product (confer (1566)):

    ‖Y‖²_F = ‖vec Y‖²₂ = ⟨Y, Y⟩ = tr(YᵀY) = Σ_{i,j} Y²_{ij} = Σᵢ λ(YᵀY)ᵢ = Σᵢ σ(Y)²ᵢ    (37)

where λ(YᵀY)ᵢ is the i-th eigenvalue of YᵀY, and σ(Y)ᵢ the i-th singular value of Y. Were Y a normal matrix (§A.5.2), then σ(Y) = |λ(Y)| [344, §8.1]; thus

    ‖Y‖²_F = Σᵢ λ(Y)²ᵢ = ‖λ(Y)‖²₂ = ⟨λ(Y), λ(Y)⟩ = ⟨Y, Y⟩    (38)

The converse, (38) ⇒ normal matrix Y, also holds. [176, §2.5.4]

Because the metrics are equivalent,

    ‖vec X − vec Y‖₂ = ‖X − Y‖_F    (39)

and because vectorization (30) is a linear bijective map, vector space R^{p×k} is isometrically isomorphic with vector space R^{pk} in the Euclidean sense, and vec is an isometric isomorphism of R^{p×k}. 2.12 Because of this Euclidean structure, all the known results from convex analysis in Euclidean space Rⁿ carry over directly to the space of real matrices R^{p×k}.

2.12 Given matrix A, its range R(A) (§2.5) is isometrically isomorphic with its vectorized range vec R(A) but not with R(vec A).
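The chain of equalities (37) and the isometry (39) can likewise be checked numerically. This sketch (again assuming NumPy; not from the original text) verifies each expression in (37), uses a symmetric matrix as a convenient instance of a normal matrix for (38), and confirms that vec preserves the Euclidean metric:

```python
import numpy as np

rng = np.random.default_rng(1)
p, k = 4, 3
Y = rng.standard_normal((p, k))
X = rng.standard_normal((p, k))

# (37): five equivalent expressions for ||Y||_F^2
fro2 = np.linalg.norm(Y, "fro") ** 2
assert np.isclose(fro2, np.linalg.norm(Y.reshape(-1)) ** 2)      # ||vec Y||_2^2
assert np.isclose(fro2, np.trace(Y.T @ Y))                       # <Y, Y>
assert np.isclose(fro2, (Y ** 2).sum())                          # sum_ij Y_ij^2
assert np.isclose(fro2, np.linalg.eigvalsh(Y.T @ Y).sum())       # sum_i lambda(Y^T Y)_i
assert np.isclose(fro2, (np.linalg.svd(Y, compute_uv=False) ** 2).sum())  # sum_i sigma(Y)_i^2

# (38): for a normal matrix (here symmetric, so eigenvalues are real),
# the squared Frobenius norm equals the sum of squared eigenvalues
S = Y @ Y.T  # symmetric, hence normal
assert np.isclose(np.linalg.norm(S, "fro") ** 2,
                  (np.linalg.eigvalsh(S) ** 2).sum())

# (39): vec is an isometry between R^{p×k} and R^{pk}
assert np.isclose(np.linalg.norm((X - Y).reshape(-1)),
                  np.linalg.norm(X - Y, "fro"))
```

The eigenvalue identity in (37) holds because tr(YᵀY) is the sum of the eigenvalues of YᵀY, which are the squared singular values of Y.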