v2009.01.01 - Convex Optimization

D.1. DIRECTIONAL DERIVATIVE, TAYLOR SERIES 613

D.1.2 Product rules for matrix-functions

Given dimensionally compatible matrix-valued functions of matrix variable
f(X) and g(X)

    ∇_X (f(X)^T g(X)) = ∇_X(f) g + ∇_X(g) f                                  (1649)

while [46, §8.3] [273]

    ∇_X tr(f(X)^T g(X)) = ∇_X ( tr(f(X)^T g(Z)) + tr(g(X) f(Z)^T) ) |_{Z←X}  (1650)

These expressions implicitly apply as well to scalar-, vector-, or matrix-valued
functions of scalar, vector, or matrix arguments.
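Rule (1650) can be sanity-checked numerically (a sketch in plain Python, not from the text): taking f(X) = g(X) = X, the rule gives ∇_X tr(X^T X) = ∇_X ( tr(X^T Z) + tr(X Z^T) ) |_{Z←X} = Z + Z |_{Z←X} = 2X, which a finite-difference gradient should reproduce.

```python
# Finite-difference check of (1650) for the special case f(X) = g(X) = X,
# where tr(X^T X) has gradient 2X. Pure Python, 2x2 matrices;
# the helper names and step size eps are ours, chosen for illustration.
def trace_xtx(X):
    # tr(X^T X) = sum of squares of all entries
    return sum(X[i][j] ** 2 for i in range(2) for j in range(2))

def fd_gradient(h, X, eps=1e-6):
    # Central-difference approximation of dh/dX_ij, entry by entry
    G = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            Xp = [row[:] for row in X]; Xm = [row[:] for row in X]
            Xp[i][j] += eps; Xm[i][j] -= eps
            G[i][j] = (h(Xp) - h(Xm)) / (2 * eps)
    return G

X = [[1.0, 2.0], [3.0, 4.0]]
G = fd_gradient(trace_xtx, X)
# (1650) predicts the gradient 2X
assert all(abs(G[i][j] - 2 * X[i][j]) < 1e-4 for i in range(2) for j in range(2))
```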

D.1.2.0.1 Example. Cubix.
Suppose f(X) : R^{2×2} → R^2 = X^T a and g(X) : R^{2×2} → R^2 = Xb . We wish
to find

    ∇_X (f(X)^T g(X)) = ∇_X a^T X^2 b                                        (1651)

using the product rule. Formula (1649) calls for

    ∇_X a^T X^2 b = ∇_X(X^T a) Xb + ∇_X(Xb) X^T a                            (1652)
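Carrying out both terms of (1652) yields the closed form a(Xb)^T + (X^T a)b^T, i.e. entry (i,j) equals a_i(Xb)_j + (X^T a)_i b_j. As an illustrative check (plain Python; the variable names and test values are ours), this can be compared against a finite-difference gradient of a^T X^2 b:

```python
# Check that (1652) gives grad a^T X^2 b with entries a_i (Xb)_j + (X^T a)_i b_j,
# by comparing against central differences. 2x2 case, illustrative values only.
def atX2b(X, a, b):
    # a^T X X b computed entrywise
    Xb  = [X[0][0]*b[0] + X[0][1]*b[1], X[1][0]*b[0] + X[1][1]*b[1]]
    XXb = [X[0][0]*Xb[0] + X[0][1]*Xb[1], X[1][0]*Xb[0] + X[1][1]*Xb[1]]
    return a[0]*XXb[0] + a[1]*XXb[1]

a, b = [1.0, -2.0], [3.0, 0.5]
X = [[1.0, 2.0], [3.0, 4.0]]

# Predicted gradient entry (i,j): a_i (Xb)_j + (X^T a)_i b_j
Xb  = [X[0][0]*b[0] + X[0][1]*b[1], X[1][0]*b[0] + X[1][1]*b[1]]
Xta = [X[0][0]*a[0] + X[1][0]*a[1], X[0][1]*a[0] + X[1][1]*a[1]]
pred = [[a[i]*Xb[j] + Xta[i]*b[j] for j in range(2)] for i in range(2)]

eps = 1e-6
for i in range(2):
    for j in range(2):
        Xp = [r[:] for r in X]; Xm = [r[:] for r in X]
        Xp[i][j] += eps; Xm[i][j] -= eps
        fd = (atX2b(Xp, a, b) - atX2b(Xm, a, b)) / (2 * eps)
        assert abs(fd - pred[i][j]) < 1e-4
```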

Consider the first of the two terms:

    ∇_X(f) g = ∇_X(X^T a) Xb = [ ∇(X^T a)_1  ∇(X^T a)_2 ] Xb                 (1653)

The gradient of X^T a forms a cubix in R^{2×2×2}.

                    ⎡ ∂(X^T a)_1/∂X_11   ∂(X^T a)_2/∂X_11 ⎤
    ∇_X(X^T a) Xb = ⎢ ∂(X^T a)_1/∂X_21   ∂(X^T a)_2/∂X_21 ⎥ ⎡ (Xb)_1 ⎤  ∈ R^{2×1×2}   (1654)
                    ⎢ ∂(X^T a)_1/∂X_12   ∂(X^T a)_2/∂X_12 ⎥ ⎣ (Xb)_2 ⎦
                    ⎣ ∂(X^T a)_1/∂X_22   ∂(X^T a)_2/∂X_22 ⎦
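The contraction in (1654) can be carried out entrywise (an illustrative sketch; the helper names and test values are ours): since (X^T a)_k = Σ_i a_i X_ik, each slice of the cubix has entries ∂(X^T a)_k/∂X_ij = a_i δ_jk, and summing the slices against (Xb)_k recovers a(Xb)^T, the first term of (1652).

```python
# Entrywise evaluation of the cubix contraction in (1654), 2x2 case only.
a = [1.0, -2.0]
b = [3.0, 0.5]
X = [[1.0, 2.0], [3.0, 4.0]]

Xb = [X[0][0]*b[0] + X[0][1]*b[1], X[1][0]*b[0] + X[1][1]*b[1]]

def cubix_slice(k):
    # d(X^T a)_k / dX_ij = a_i if j == k else 0, since (X^T a)_k = sum_i a_i X_ik
    return [[a[i] if j == k else 0.0 for j in range(2)] for i in range(2)]

# Contract the cubix against Xb: sum_k slice_k * (Xb)_k
term = [[sum(cubix_slice(k)[i][j] * Xb[k] for k in range(2))
         for j in range(2)] for i in range(2)]

# The result equals a (Xb)^T, the first term of (1652)
expected = [[a[i] * Xb[j] for j in range(2)] for i in range(2)]
assert term == expected
```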
