v2009.01.01 - Convex Optimization
44 CHAPTER 2. CONVEX GEOMETRY

2.1.9 inverse image

While epigraph and sublevel sets (3.1.7) of a convex function must be convex, the image and inverse image of a convex set under a convex function are generally not. Although there are many examples to the contrary, the most prominent are the affine functions:

2.1.9.0.1 Theorem. Image, Inverse image. [266, §3]
Let f be a mapping from R^{p×k} to R^{m×n}. The image of a convex set C under any affine function (3.1.6)

    f(C) = {f(X) | X ∈ C} ⊆ R^{m×n}    (24)

is convex. The inverse image^{2.8} of a convex set F,

    f^{−1}(F) = {X | f(X) ∈ F} ⊆ R^{p×k}    (25)

a single- or many-valued mapping, under any affine function f is convex. ⋄

In particular, any affine transformation of an affine set remains affine. [266, p.8] Ellipsoids are invariant to any affine transformation.

Each converse of this two-part theorem is generally false; id est, given f affine, a convex image f(C) does not imply that set C is convex, and neither does a convex inverse image f^{−1}(F) imply set F is convex. A counterexample is easy to visualize when the affine function is an orthogonal projector [287] [215]:

2.1.9.0.2 Corollary. Projection on subspace.^{2.9} (1809) [266, §3]
Orthogonal projection of a convex set on a subspace or nonempty affine set is another convex set. ⋄

Again, the converse is false. Shadows, for example, are umbral projections that can be convex when the body providing the shade is not.

^{2.8} See Example 2.9.1.0.2 or Example 3.1.7.0.2 for an application.
^{2.9} For hyperplane representations see §2.4.2. For projection of convex sets on hyperplanes see [324, §6.6]. A nonempty affine set is called an affine subset (§2.3.1). Orthogonal projection of points on affine subsets is reviewed in §E.4.
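The mechanism behind the theorem can be checked numerically: an affine map carries any convex combination of points to the same convex combination of their images, so convexity of C transfers to f(C). A minimal sketch (the map f, its dimensions, and all data here are hypothetical, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical affine map f(X) = A @ X + B from R^{3x2} to R^{2x2}
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 2))
def f(X):
    return A @ X + B

X1 = rng.standard_normal((3, 2))
X2 = rng.standard_normal((3, 2))
theta = 0.3

# Affine maps commute with convex combinations:
# f(theta*X1 + (1-theta)*X2) == theta*f(X1) + (1-theta)*f(X2)
lhs = f(theta * X1 + (1 - theta) * X2)
rhs = theta * f(X1) + (1 - theta) * f(X2)
assert np.allclose(lhs, rhs)
```

This identity is exactly what makes a convex combination of points of C land, after mapping, on the corresponding convex combination of points of f(C).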
2.2. VECTORIZED-MATRIX INNER PRODUCT 45

2.2 Vectorized-matrix inner product
Euclidean space R^n comes equipped with a linear vector inner-product

    〈y, z〉 ≜ y^T z    (26)
We prefer those angle brackets to connote a geometric rather than algebraic perspective; e.g., vector y might represent a hyperplane normal (§2.4.2). Two vectors are orthogonal (perpendicular) to one another if and only if their inner product vanishes;

    y ⊥ z ⇔ 〈y, z〉 = 0    (27)

When orthogonal vectors each have unit norm, then they are orthonormal.
A vector inner-product defines Euclidean norm (vector 2-norm)

    ‖y‖_2 = ‖y‖ ≜ √(y^T y) ,    ‖y‖ = 0 ⇔ y = 0    (28)
For linear operation A on a vector, represented by a real matrix, the adjoint operation A^T is transposition and defined for matrix A by [197, §3.10]

    〈y, A^T z〉 ≜ 〈Ay, z〉    (29)
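The adjoint identity (29) is easy to verify numerically for a randomly chosen real matrix (a quick sketch with arbitrary data, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))   # linear operator from R^3 to R^4
y = rng.standard_normal(3)
z = rng.standard_normal(4)

# <y, A^T z> == <A y, z> : transposition is the adjoint
assert np.isclose(y @ (A.T @ z), (A @ y) @ z)
```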
The vector inner-product for matrices is calculated just as it is for vectors; by first transforming a matrix in R^{p×k} to a vector in R^{pk} by concatenating its columns in the natural order. For lack of a better term, we shall call that linear bijective (one-to-one and onto [197, App. A1.2]) transformation vectorization. For example, the vectorization of Y = [y_1 y_2 · · · y_k] ∈ R^{p×k} [140] [284] is
    vec Y ≜ [ y_1^T  y_2^T  · · ·  y_k^T ]^T ∈ R^{pk}    (30)
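In NumPy terms, vectorization is column-major flattening; a short illustration (the matrix entries are arbitrary, not from the text):

```python
import numpy as np

Y = np.array([[1, 4],
              [2, 5],
              [3, 6]])        # Y in R^{3x2} with columns y1 = (1,2,3), y2 = (4,5,6)

# vec Y stacks the columns in their natural order:
# order='F' (Fortran/column-major) flattening does exactly that
vecY = Y.flatten(order='F')
print(vecY)                   # [1 2 3 4 5 6]
```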
Then the vectorized-matrix inner-product is trace of matrix inner-product; for Z ∈ R^{p×k}, [53, §2.6.1] [173, §0.3.1] [334, §8] [318, §2.2]

    〈Y , Z〉 ≜ tr(Y^T Z) = vec(Y)^T vec Z    (31)

where (A.1.1)

    tr(Y^T Z) = tr(Z Y^T) = tr(Y Z^T) = tr(Z^T Y) = 1^T (Y ◦ Z) 1    (32)
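All the trace identities in (31)–(32) can be confirmed on random matrices; ◦ denotes the Hadamard (entrywise) product, so 1^T (Y ◦ Z) 1 is the sum of all entries of Y ◦ Z. A quick numerical check (data arbitrary, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.standard_normal((3, 2))
Z = rng.standard_normal((3, 2))

ip = np.trace(Y.T @ Z)                                  # <Y, Z> = tr(Y^T Z)
assert np.isclose(ip, Y.flatten('F') @ Z.flatten('F'))  # = vec(Y)^T vec Z
assert np.isclose(ip, np.trace(Z @ Y.T))                # = tr(Z Y^T)
assert np.isclose(ip, np.trace(Y @ Z.T))                # = tr(Y Z^T)
assert np.isclose(ip, np.trace(Z.T @ Y))                # = tr(Z^T Y)
assert np.isclose(ip, np.sum(Y * Z))                    # = 1^T (Y o Z) 1
```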