D.1. DIRECTIONAL DERIVATIVE, TAYLOR SERIES

for which $A \otimes 1 = 1 \otimes A = A$ (real unity acts like the identity).

One advantage of vectorization is the existence of a traditional two-dimensional matrix representation (second-order tensor) for the second-order gradient of a real function with respect to a vectorized matrix. For example, from §A.1.1 no. 33 (§D.2.1), for square $A, B \in \mathbb{R}^{n\times n}$ [166, §5.2] [12, §3]

$$
\nabla^2_{\operatorname{vec} X}\,\operatorname{tr}(AXBX^{\mathrm T})
\,=\, \nabla^2_{\operatorname{vec} X}\,\operatorname{vec}(X)^{\mathrm T}(B^{\mathrm T}\!\otimes A)\operatorname{vec} X
\,=\, B\otimes A^{\mathrm T} + B^{\mathrm T}\!\otimes A
\;\in\; \mathbb{R}^{n^2\times n^2}
\tag{1781}
$$

A disadvantage is the large new (but known) set of algebraic rules (§A.1.1), and the fact that mere use of vectorization does not generally guarantee a two-dimensional matrix representation of gradients.

Another application of the Kronecker product is to reverse the order of appearance in a matrix product: Suppose we wish to weight the columns of a matrix $S \in \mathbb{R}^{M\times N}$, for example by respective entries $w_i$ from the main diagonal in

$$
W \,\triangleq\,
\begin{bmatrix}
w_1 & & 0\\
& \ddots & \\
0^{\mathrm T} & & w_N
\end{bmatrix}
\in \mathbb{S}^N
\tag{1782}
$$

A conventional means of accomplishing column weighting is to multiply $S$ by the diagonal matrix $W$ on the right-hand side:

$$
SW \,=\, S
\begin{bmatrix}
w_1 & & 0\\
& \ddots & \\
0^{\mathrm T} & & w_N
\end{bmatrix}
\,=\,
\begin{bmatrix} S(:,1)\,w_1 & \cdots & S(:,N)\,w_N \end{bmatrix}
\in \mathbb{R}^{M\times N}
\tag{1783}
$$

To reverse the product order such that the diagonal matrix $W$ instead appears to the left of $S$: for $I \in \mathbb{S}^M$ (Sze Wan)

$$
SW \,=\, \bigl(\delta(W)^{\mathrm T} \otimes I\bigr)
\begin{bmatrix}
S(:,1) & & 0\\
& \ddots & \\
0 & & S(:,N)
\end{bmatrix}
\in \mathbb{R}^{M\times N}
\tag{1784}
$$

To instead weight the rows of $S$ via a diagonal matrix $W \in \mathbb{S}^M$: for $I \in \mathbb{S}^N$

$$
WS \,=\,
\begin{bmatrix}
S(1,:) & & 0\\
& \ddots & \\
0 & & S(M,:)
\end{bmatrix}
\bigl(\delta(W) \otimes I\bigr)
\in \mathbb{R}^{M\times N}
\tag{1785}
$$
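Equation (1781) is easy to confirm numerically. The following is a minimal sketch (my construction, not from the book) assuming NumPy, with $\operatorname{vec}$ taken as column-stacking (Fortran order). Because $\operatorname{tr}(AXBX^{\mathrm T}) = \operatorname{vec}(X)^{\mathrm T}(B^{\mathrm T}\!\otimes A)\operatorname{vec} X$ is a quadratic form, second differences over standard basis vectors recover its Hessian exactly, with no truncation error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def f(x):
    """f(vec X) = tr(A X B X^T), with vec = column-stacking (Fortran order)."""
    X = x.reshape((n, n), order="F")
    return np.trace(A @ X @ B @ X.T)

# f is the quadratic form x^T (B^T kron A) x, so its Hessian is recovered
# exactly from second differences on basis vectors (using f(0) = 0):
#   H[i,j] = f(e_i + e_j) - f(e_i) - f(e_j)
N = n * n
E = np.eye(N)
H = np.array([[f(E[i] + E[j]) - f(E[i]) - f(E[j]) for j in range(N)]
              for i in range(N)])

H_formula = np.kron(B, A.T) + np.kron(B.T, A)   # right-hand side of (1781)
assert np.allclose(H, H_formula)
print("Hessian matches B (x) A^T + B^T (x) A:", np.allclose(H, H_formula))
```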
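The reversal identities (1784) and (1785) can be checked the same way. The sketch below assumes NumPy and SciPy's `block_diag` (again my choice of tooling, not the book's); $\delta(W)$ is represented directly by the vector of diagonal entries, and the block-diagonal matrices stack the columns (respectively rows) of $S$.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
M, N = 3, 5
S = rng.standard_normal((M, N))

# Column weighting, eq (1783): SW with W = diag(w), w in R^N.
w = rng.standard_normal(N)
SW = S @ np.diag(w)

# Eq (1784): move W to the left of S.  Stack columns S(:,j) as M x 1 blocks
# of a block-diagonal matrix; delta(W)^T kron I_M collapses them back.
cols = block_diag(*[S[:, [j]] for j in range(N)])          # (MN) x N
assert np.allclose(SW, np.kron(w[None, :], np.eye(M)) @ cols)

# Row weighting, eq (1785): WS with W = diag(v), v in R^M.
v = rng.standard_normal(M)
WS = np.diag(v) @ S

rows = block_diag(*[S[[i], :] for i in range(M)])          # M x (MN)
assert np.allclose(WS, rows @ np.kron(v[:, None], np.eye(N)))
print("Equations (1784) and (1785) verified.")
```

Both identities are pure bookkeeping: $\delta(W)^{\mathrm T}\otimes I = [\,w_1 I \;\cdots\; w_N I\,]$, so the product collapses each block of the block-diagonal matrix back into a weighted column (and analogously for rows).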
