v2009.01.01 - Convex Optimization


682 APPENDIX E. PROJECTION

where the convex cone has vertex-description (2.12.2.0.1), for A ∈ R^(n×N)

    K = {Ay | y ≽ 0}                                              (1912)

and where ‖y‖∞ ≤ 1 is the artificial bound. This is a convex optimization problem having no known closed-form solution, in general. It arises, for example, in the fitting of hearing aids designed around a programmable graphic equalizer (a filter bank whose only adjustable parameters are gain per band, each bounded above by unity). [84] The problem is equivalent to a Schur-form semidefinite program (3.1.7.2)

    minimize      t
    y∈R^N, t∈R
                  ⎡    tI      x − Ay ⎤
    subject to    ⎣ (x − Ay)^T    t   ⎦ ≽ 0                       (1913)

                  0 ≼ y ≼ 1

E.9.3  Nonexpansivity

E.9.3.0.1 Theorem. Nonexpansivity. [149, §2] [92, §5.3]
When C ⊂ R^n is an arbitrary closed convex set, the projector P projecting on C is nonexpansive in the sense: for any vectors x, y ∈ R^n

    ‖Px − Py‖ ≤ ‖x − y‖                                          (1914)

with equality when x − Px = y − Py.^(E.17)                        ⋄

Proof. [47]

    ‖x − y‖² = ‖Px − Py‖² + ‖(I − P)x − (I − P)y‖²
               + 2⟨x − Px, Px − Py⟩ + 2⟨y − Py, Py − Px⟩          (1915)

Nonnegativity of the last two terms follows directly from the unique minimum-distance projection theorem (E.9.1.0.2).  ∎

^(E.17) This condition for equality corrects an error in [69] (where the norm is applied to each side of the condition given here), easily revealed by counterexample.
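The decomposition in the proof holds as an algebraic identity for any map P, since x − y = (Px − Py) + ((I − P)x − (I − P)y); the projection theorem only enters to make the two cross terms nonnegative. A minimal numerical sketch, assuming for illustration the closed convex set C = R^n_+ (the nonnegative orthant, whose projector is entrywise clipping at zero — the function P below is that choice, not from the text):

```python
# Sanity check of identity (1915) and nonexpansivity (1914) for
# projection on the nonnegative orthant C = R^n_+, where Px = max(x, 0).
import numpy as np

rng = np.random.default_rng(0)
n = 5

def P(v):
    """Unique minimum-distance projection on the nonnegative orthant."""
    return np.maximum(v, 0)

x, y = rng.normal(size=n), rng.normal(size=n)
Px, Py = P(x), P(y)

# Right-hand side of identity (1915): two squared norms plus cross terms.
rhs = (np.sum((Px - Py)**2)
       + np.sum(((x - Px) - (y - Py))**2)
       + 2 * np.dot(x - Px, Px - Py)
       + 2 * np.dot(y - Py, Py - Px))
lhs = np.sum((x - y)**2)

assert np.isclose(lhs, rhs)                                # identity (1915)
assert np.linalg.norm(Px - Py) <= np.linalg.norm(x - y)    # theorem (1914)
```

Any other easily computed projector (e.g. onto a box or a ball) can be substituted for P and the same assertions go through, which is a useful debugging check when implementing projections.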

E.9. PROJECTION ON CONVEX SET                                     683

The foregoing proof reveals another flavor of nonexpansivity; for each and every x, y ∈ R^n

    ‖Px − Py‖² + ‖(I − P)x − (I − P)y‖² ≤ ‖x − y‖²                (1916)

Deutsch shows yet another: [92, §5.5]

    ‖Px − Py‖² ≤ ⟨x − y, Px − Py⟩                                 (1917)

E.9.4  Easy projections

Projecting any matrix H ∈ R^(n×n) in the Euclidean/Frobenius sense orthogonally on the subspace of symmetric matrices S^n in isomorphic R^(n²) amounts to taking the symmetric part of H; (2.2.2.0.1) id est, (H + H^T)/2 is the projection.

To project any H ∈ R^(n×n) orthogonally on the symmetric hollow subspace S^n_h in isomorphic R^(n²) (2.2.3.0.1), we may take the symmetric part and then zero all entries along the main diagonal, or vice versa (because this is projection on the intersection of two subspaces); id est, (H + H^T)/2 − δ²(H).

To project a matrix on the nonnegative orthant R^(m×n)_+, simply clip all negative entries to 0. Likewise, projection on the nonpositive orthant R^(m×n)_− sees all positive entries clipped to 0. Projection on other orthants is equally simple with appropriate clipping.

Projecting on hyperplane, halfspace, slab: §E.5.0.0.8.

Projection of y ∈ R^n on the Euclidean ball B = {x ∈ R^n | ‖x − a‖ ≤ c} : for y outside the ball (‖y − a‖ ≥ c)

    P_B y = a + c (y − a)/‖y − a‖

while any y ∈ B projects to itself.

Clipping in excess of |1| each entry of a point x ∈ R^n is equivalent to unique minimum-distance projection of x on a hypercube centered at the origin. (confer §E.10.3.2)

Projection of x ∈ R^n on a (rectangular) hyperbox: [53, §8.1.1]

    C = {y ∈ R^n | l ≼ y ≼ u, l ≺ u}                              (1918)

             ⎧ l_k ,  x_k ≤ l_k
    P(x)_k = ⎨ x_k ,  l_k ≤ x_k ≤ u_k ,     k = 1...n             (1919)
             ⎩ u_k ,  x_k ≥ u_k
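Each of the easy projections above is a one-liner in numpy. A minimal sketch; the function names (sym_part, proj_hollow, proj_ball, proj_box) are illustrative choices, not from the text:

```python
# Closed-form projections from the "Easy projections" list, in numpy.
import numpy as np

def sym_part(H):
    """Project H orthogonally (Frobenius sense) on the symmetric matrices S^n."""
    return (H + H.T) / 2

def proj_hollow(H):
    """Project H on the symmetric hollow subspace S^n_h:
    symmetric part with the main diagonal zeroed, (H + H^T)/2 - delta^2(H)."""
    S = sym_part(H)
    return S - np.diag(np.diag(S))

def proj_ball(y, a, c):
    """Project y on the Euclidean ball B = {x : ||x - a|| <= c}."""
    d = np.linalg.norm(y - a)
    return y if d <= c else a + c * (y - a) / d

def proj_box(x, l, u):
    """Project x on the hyperbox {y : l <= y <= u}, entrywise clipping (1919)."""
    return np.clip(x, l, u)

H = np.arange(9.0).reshape(3, 3)
assert np.allclose(sym_part(H), sym_part(H).T)       # result is symmetric
assert np.allclose(np.diag(proj_hollow(H)), 0.0)     # result is hollow
assert np.allclose(proj_ball(np.array([3.0, 4.0]), np.zeros(2), 1.0),
                   [0.6, 0.8])                       # radial scaling onto the sphere
assert np.allclose(proj_box(np.array([-2.0, 0.5, 9.0]), 0.0, 1.0),
                   [0.0, 0.5, 1.0])                  # clip to [0, 1]
```

Projection on the nonnegative orthant is the special case proj_box(x, 0, ∞), i.e. np.maximum(x, 0); the hollow projection commutes as the text notes, because symmetrization and diagonal-zeroing project on two subspaces whose intersection is S^n_h.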

