
Lecture 18 Subgradients


Optimality Conditions: Unconstrained Case

Unconstrained optimization

minimize f(x)

Assumption

• The function f is convex (possibly non-differentiable) and proper
[f proper means f(x) > −∞ for all x and dom f ≠ ∅]

Theorem  Under this assumption, a vector x* minimizes f over R^n if and only if

0 ∈ ∂f(x*)

• The result is a generalization of the condition ∇f(x*) = 0 for differentiable f (a concrete instance is sketched below)

• Proof  x* is optimal if and only if f(x) ≥ f(x*) for all x, or equivalently

f(x) ≥ f(x*) + 0^T(x − x*) for all x ∈ R^n

By the definition of a subgradient, this inequality says precisely that the zero vector is a subgradient of f at x*. Thus, x* is optimal if and only if 0 ∈ ∂f(x*)
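As a concrete instance of the theorem, take f(x) = |x| on R. Here ∂f(x) = {sign(x)} for x ≠ 0 and ∂f(0) = [−1, 1], so 0 ∈ ∂f(0) and x* = 0 is the unique minimizer. The Python sketch below (an illustration, not part of the slide; the helper names are mine) encodes this optimality test.

```python
# Minimal sketch of the optimality test 0 ∈ ∂f(x) for f(x) = |x|.
# Helper names are illustrative, not from the lecture.

def subdifferential_abs(x):
    """Subdifferential of f(x) = |x|, returned as a closed interval (lo, hi)."""
    if x > 0:
        return (1.0, 1.0)    # f is differentiable here: ∂f(x) = {+1}
    if x < 0:
        return (-1.0, -1.0)  # ∂f(x) = {-1}
    return (-1.0, 1.0)       # kink at 0: every slope in [-1, 1] supports |x|

def is_minimizer(x):
    """Theorem: x minimizes f if and only if 0 ∈ ∂f(x)."""
    lo, hi = subdifferential_abs(x)
    return lo <= 0.0 <= hi

print(is_minimizer(0.0))  # True:  0 ∈ [-1, 1], so x* = 0 is optimal
print(is_minimizer(0.5))  # False: ∂f(0.5) = {1} does not contain 0
```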

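The condition also explains why subgradient iterations settle only at minimizers. The following self-contained sketch (my own illustration under the stated assumptions, not from the lecture) runs a standard subgradient method on f(x) = ||x||_1 with diminishing steps 1/k; the iterates are driven toward x* = 0, the point where 0 ∈ ∂f(x*).

```python
import numpy as np

# Illustrative sketch: subgradient method on f(x) = ||x||_1,
# whose minimizer x* = 0 satisfies 0 ∈ ∂f(0) = [-1, 1]^n.

def f(x):
    return np.sum(np.abs(x))

def subgrad(x):
    # np.sign(x) is a valid subgradient of the 1-norm: at coordinates
    # with x_i = 0 it returns 0, which lies in the subdifferential [-1, 1].
    return np.sign(x)

x = np.array([3.0, -2.0])
for k in range(1, 2001):
    x = x - (1.0 / k) * subgrad(x)  # diminishing step size a_k = 1/k

print(x, f(x))  # iterates approach x* = 0, where the theorem's condition holds
```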
