v2009.01.01 - Convex Optimization


2.1.9 inverse image

While epigraph and sublevel sets (§3.1.7) of a convex function must be convex, the image and inverse image of a convex set under a convex function are generally not. Although there are many examples to the contrary, the most prominent are the affine functions:

2.1.9.0.1 Theorem. Image, Inverse image. [266, §3] Let f be a mapping from R^{p×k} to R^{m×n}. The image of a convex set C under any affine function (§3.1.6)

f(C) = {f(X) | X ∈ C} ⊆ R^{m×n}    (24)

is convex. The inverse image^{2.8} of a convex set F,

f^{-1}(F) = {X | f(X) ∈ F} ⊆ R^{p×k}    (25)

a single- or many-valued mapping, under any affine function f is convex. ⋄

In particular, any affine transformation of an affine set remains affine. [266, p.8] Ellipsoids are invariant to any [sic] affine transformation.

Each converse of this two-part theorem is generally false; id est, given f affine, a convex image f(C) does not imply that set C is convex, and neither does a convex inverse image f^{-1}(F) imply set F is convex. A counter-example is easy to visualize when the affine function is an orthogonal projector [287] [215]:

2.1.9.0.2 Corollary. Projection on subspace.^{2.9} (1809) [266, §3] Orthogonal projection of a convex set on a subspace or nonempty affine set is another convex set. ⋄

Again, the converse is false. Shadows, for example, are umbral projections that can be convex when the body providing the shade is not.

^{2.8} See Example 2.9.1.0.2 or Example 3.1.7.0.2 for an application.
^{2.9} For hyperplane representations see §2.4.2. For projection of convex sets on hyperplanes see [324, §6.6]. A nonempty affine set is called an affine subset (§2.3.1). Orthogonal projection of points on affine subsets is reviewed in §E.4.
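To make the corollary and the shadow remark concrete, here is a minimal numerical sketch (assuming NumPy; the annulus and projector are illustrative choices, not from the text). An annulus in R^2 is not convex, yet its orthogonal projection onto the x-axis, an affine map, is a convex interval:

```python
import numpy as np

# Orthogonal projector onto the x-axis, a subspace of R^2 (an affine map).
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# Sample the nonconvex annulus 1 <= ||x|| <= 2.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 5000)
r = rng.uniform(1.0, 2.0, 5000)
C = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# Nonconvexity: the midpoint of two points of the annulus falls in its hole.
a, b = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
print(np.linalg.norm((a + b) / 2))   # 0.0 < 1, so the midpoint leaves the set

# The projected set ("shadow") covers the interval [-2, 2] -- convex.
shadow = C @ P.T
print(shadow[:, 0].min(), shadow[:, 0].max())   # approximately -2 and 2
```

The same experiment run on any convex C would, per the corollary, always produce a convex shadow; the point here is that the converse direction fails.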

2.2 Vectorized-matrix inner product

Euclidean space R^n comes equipped with a linear vector inner-product

⟨y, z⟩ ≜ y^T z    (26)

We prefer those angle brackets to connote a geometric rather than algebraic perspective; e.g., vector y might represent a hyperplane normal (§2.4.2). Two vectors are orthogonal (perpendicular) to one another if and only if their inner product vanishes;

y ⊥ z ⇔ ⟨y, z⟩ = 0    (27)

When orthogonal vectors each have unit norm, then they are orthonormal. A vector inner-product defines Euclidean norm (vector 2-norm)

‖y‖_2 = ‖y‖ ≜ √(y^T y),    ‖y‖ = 0 ⇔ y = 0    (28)

For linear operation A on a vector, represented by a real matrix, the adjoint operation A^T is transposition and defined for matrix A by [197, §3.10]

⟨y, A^T z⟩ ≜ ⟨Ay, z⟩    (29)

The vector inner-product for matrices is calculated just as it is for vectors: by first transforming a matrix in R^{p×k} to a vector in R^{pk} by concatenating its columns in the natural order. For lack of a better term, we shall call that linear bijective (one-to-one and onto [197, App.A1.2]) transformation vectorization. For example, the vectorization of Y = [y_1 y_2 ⋯ y_k] ∈ R^{p×k} [140] [284] is

vec Y ≜ [y_1^T y_2^T ⋯ y_k^T]^T ∈ R^{pk}    (30)

Then the vectorized-matrix inner-product is trace of matrix inner-product; for Z ∈ R^{p×k}, [53, §2.6.1] [173, §0.3.1] [334, §8] [318, §2.2]

⟨Y, Z⟩ ≜ tr(Y^T Z) = vec(Y)^T vec Z    (31)

where (§A.1.1)

tr(Y^T Z) = tr(ZY^T) = tr(YZ^T) = tr(Z^T Y) = 1^T (Y ∘ Z) 1    (32)
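The adjoint identity (29) is easy to check numerically; a minimal sketch assuming NumPy, with arbitrary test data (the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # linear operation A : R^4 -> R^3
y = rng.standard_normal(4)
z = rng.standard_normal(3)

# <y, A^T z> = <A y, z>: transposition realizes the adjoint of A.
print(np.isclose(y @ (A.T @ z), (A @ y) @ z))   # True
```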

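Likewise, identities (30)–(32) can be verified numerically. A sketch assuming NumPy: since NumPy flattens row-major by default, column-stacking vectorization is obtained by transposing before `ravel`:

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.standard_normal((3, 4))   # Y, Z in R^{p x k} with p = 3, k = 4
Z = rng.standard_normal((3, 4))

def vec(M):
    # Concatenate columns in natural order: vec M in R^{pk}   (30)
    return M.T.ravel()

inner = np.trace(Y.T @ Z)                      # <Y, Z> = tr(Y^T Z)   (31)
print(np.isclose(inner, vec(Y) @ vec(Z)))      # ... = vec(Y)^T vec(Z)
print(np.isclose(inner, np.trace(Z @ Y.T)))    # tr(Z Y^T)            (32)
print(np.isclose(inner, np.trace(Y @ Z.T)))    # tr(Y Z^T)
print(np.isclose(inner, np.trace(Z.T @ Y)))    # tr(Z^T Y)
print(np.isclose(inner, np.sum(Y * Z)))        # 1^T (Y o Z) 1
```

Every `print` emits True: the four trace expressions, the vectorized dot product, and the Hadamard-product sum all compute the same scalar.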
