v2010.10.26 - Convex Optimization

3.6 GRADIENT

3.6.1.0.2 Theorem. Gradient monotonicity. [199, B.4.1.4] [55, 3.1 exer.20]
Given real differentiable function f(X) : R^{p×k} → R with matrix argument on open convex domain, the condition

    ⟨∇f(Y) − ∇f(X) , Y − X⟩ ≥ 0   for each and every X, Y ∈ dom f    (605)

is necessary and sufficient for convexity of f. Strict inequality, with the caveat Y ≠ X, provides the necessary and sufficient condition for strict convexity. ⋄

3.6.1.0.3 Example. Composition of functions. [61, 3.2.4] [199, B.2.1]
Monotonic functions play a vital role in determining convexity of functions constructed by transformation. Given functions g : R^k → R and h : R^n → R^k, their composition f = g(h) : R^n → R defined by

    f(x) = g(h(x)) ,   dom f = {x ∈ dom h | h(x) ∈ dom g}    (606)

is convex if

    g is convex nondecreasing monotonic and h is convex,
    g is convex nonincreasing monotonic and h is concave,

and composite function f is concave if

    g is concave nondecreasing monotonic and h is concave,
    g is concave nonincreasing monotonic and h is convex,

where ∞ (−∞) is assigned to convex (concave) g when evaluated outside its domain. For differentiable functions, these rules are consequent to (1790). Convexity (concavity) of any g is preserved when h is affine.

If f and g are nonnegative convex real functions, then (f(x)^k + g(x)^k)^{1/k} is also convex for integer k ≥ 1. [249, p.44] A squared norm is convex and has the same minimum as the norm itself, because squaring is convex nondecreasing monotonic on the nonnegative real line.
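To illustrate Theorem 3.6.1.0.2 numerically, here is a minimal sketch (not from the original text; it assumes NumPy and the illustrative choice f(X) = ‖X‖_F², whose gradient is ∇f(X) = 2X, so the inner product in (605) equals 2‖Y − X‖_F² and is always nonnegative):

    import numpy as np

    # Numerical check of the monotonicity condition (605) for the convex
    # function f(X) = ||X||_F^2 on R^{3x4}, whose gradient is grad f(X) = 2X.
    # Here <grad f(Y) - grad f(X), Y - X> = 2 ||Y - X||_F^2 >= 0 always.
    rng = np.random.default_rng(0)
    for _ in range(1000):
        X = rng.standard_normal((3, 4))
        Y = rng.standard_normal((3, 4))
        gap = np.sum((2*Y - 2*X) * (Y - X))  # trace inner product <A, B>
        assert gap >= 0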
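The composition rules can be spot-checked the same way. A minimal sketch (the functions below are illustrative choices, not drawn from the text): with g(y) = e^y convex nondecreasing and h(x) = x² convex, the first rule predicts f(x) = e^{x²} is convex, which a midpoint-convexity test confirms on random samples:

    import numpy as np

    # Midpoint-convexity spot check of the first composition rule:
    # g(y) = exp(y) is convex nondecreasing, h(x) = x^2 is convex,
    # so f(x) = g(h(x)) = exp(x^2) should be convex.
    f = lambda x: np.exp(x**2)
    rng = np.random.default_rng(1)
    x, y = rng.uniform(-2, 2, size=(2, 100000))
    assert np.all(f((x + y)/2) <= (f(x) + f(y))/2 + 1e-12)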
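The power-sum fact from [249, p.44] admits the same kind of check. A minimal sketch (again with illustrative functions not in the original): taking f(x) = |x| and g(x) = x², both nonnegative convex, the claim for k = 2 is that (f(x)² + g(x)²)^{1/2} = (x² + x⁴)^{1/2} is convex:

    import numpy as np

    # Midpoint-convexity spot check of (f^k + g^k)^(1/k) with k = 2,
    # f(x) = |x|, g(x) = x^2: F(x) = sqrt(x^2 + x^4) should be convex.
    F = lambda x: np.sqrt(x**2 + x**4)
    rng = np.random.default_rng(2)
    x, y = rng.uniform(-3, 3, size=(2, 100000))
    assert np.all(F((x + y)/2) <= (F(x) + F(y))/2 + 1e-9)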
