3.6. GRADIENT                                                                 253

Invertibility is guaranteed for any finite positive value of $t$ by (1451). Then
matrix $X$ becomes the pseudoinverse $X \to A^{\dagger} \triangleq X^{\star}$ in the
limit $t \to 0^{+}$. Minimizing instead $\|AX - I\|_{\mathrm F}^{2}$ yields the
second flavor in (1869). $\square$

3.6.0.0.3 Example. Hyperplane, line, described by affine function.
Consider the real affine function of vector variable (confer Figure 73)
\[
f(x) : \mathbb{R}^{p} \to \mathbb{R} = a^{\mathrm T} x + b \tag{598}
\]
whose domain is $\mathbb{R}^{p}$ and whose gradient $\nabla f(x) = a$ is a constant
vector (independent of $x$). This function describes the real line $\mathbb{R}$
(its range), and it describes a nonvertical [199, §B.1.2] hyperplane
$\partial\mathcal{H}$ in the space $\mathbb{R}^{p} \times \mathbb{R}$ for any
particular vector $a$ (confer §2.4.2);
\[
\partial\mathcal{H} = \left\{ \begin{bmatrix} x \\ a^{\mathrm T} x + b \end{bmatrix}
\,\middle|\, x \in \mathbb{R}^{p} \right\} \subset \mathbb{R}^{p} \times \mathbb{R} \tag{599}
\]
having nonzero normal
\[
\eta = \begin{bmatrix} a \\ -1 \end{bmatrix} \in \mathbb{R}^{p} \times \mathbb{R} \tag{600}
\]
This equivalence to a hyperplane holds only for real functions.$^{3.19}$ Epigraph
of real affine function $f(x)$ is therefore a halfspace in
$\mathbb{R}^{p} \times \mathbb{R}$, so we have:

    The real affine function is to convex functions
    as the hyperplane is to convex sets.

$^{3.19}$ To prove that, consider a vector-valued affine function
\[
f(x) : \mathbb{R}^{p} \to \mathbb{R}^{M} = Ax + b
\]
having gradient $\nabla f(x) = A^{\mathrm T} \in \mathbb{R}^{p \times M}$: The affine set
\[
\left\{ \begin{bmatrix} x \\ Ax + b \end{bmatrix} \,\middle|\, x \in \mathbb{R}^{p} \right\}
\subset \mathbb{R}^{p} \times \mathbb{R}^{M}
\]
is perpendicular to
\[
\eta \triangleq \begin{bmatrix} \nabla f(x) \\ -I \end{bmatrix}
\in \mathbb{R}^{p \times M} \times \mathbb{R}^{M \times M}
\]
because
\[
\eta^{\mathrm T} \left( \begin{bmatrix} x \\ Ax + b \end{bmatrix}
- \begin{bmatrix} 0 \\ b \end{bmatrix} \right) = 0 \quad \forall\, x \in \mathbb{R}^{p}
\]
Yet $\eta$ is a vector (in $\mathbb{R}^{p} \times \mathbb{R}^{M}$) only when $M = 1$.
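Both claims above are easy to spot-check numerically. The NumPy sketch below is not
from the book: it assumes the regularized minimizer of the preceding pseudoinverse
example has the closed form $X^{\star} = A^{\mathrm T}(AA^{\mathrm T} + tI)^{-1}$,
verifies that it approaches $A^{\dagger}$ as $t \to 0^{+}$, and checks that
$\eta = [\,a\,;\,-1\,]$ from (600) is orthogonal to differences of points on the
hyperplane (599).

```python
# Numeric sketch (not from the book). Assumes the regularized minimizer of the
# preceding pseudoinverse example has the closed form X* = A^T (A A^T + t I)^{-1}.
import numpy as np

rng = np.random.default_rng(0)
m, p = 3, 5
A = rng.standard_normal((m, p))

# X* -> A† as t -> 0+ : the Frobenius gap to NumPy's pseudoinverse shrinks with t
for t in (1e-1, 1e-4, 1e-8):
    X_star = A.T @ np.linalg.inv(A @ A.T + t * np.eye(m))
    gap = np.linalg.norm(X_star - np.linalg.pinv(A))
    print(f"t = {t:.0e}   ||X* - pinv(A)||_F = {gap:.2e}")

# Hyperplane (599) described by the real affine function f(x) = a^T x + b,
# with normal eta = [a; -1] from (600)
a, b = rng.standard_normal(p), rng.standard_normal()
eta = np.concatenate([a, [-1.0]])
graph = lambda x: np.concatenate([x, [a @ x + b]])        # a point [x; f(x)] on the hyperplane
x1, x2 = rng.standard_normal(p), rng.standard_normal(p)
print(np.isclose(eta @ (graph(x1) - graph(x2)), 0.0))     # eta is perpendicular: prints True
```

The regularization keeps $AA^{\mathrm T} + tI$ invertible for every $A$ whenever
$t > 0$, which is what the text attributes to (1451); the printed gaps simply show
the limit $t \to 0^{+}$ recovering $A^{\dagger}$.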
