D.1 DIRECTIONAL DERIVATIVE, TAYLOR SERIES

One advantage to vectorization is existence of a traditional two-dimensional matrix representation for the second-order gradient of a real function with respect to a vectorized matrix. For example, from §A.1.1 no. 23 (§D.2.1) for square $A, B \in \mathbb{R}^{n\times n}$ [115, §5.2] [10, §3]

$$
\nabla^2_{\operatorname{vec} X} \operatorname{tr}(AXBX^T)
= \nabla^2_{\operatorname{vec} X}\, \operatorname{vec}(X)^T (B^T\!\otimes A) \operatorname{vec} X
= B\otimes A^T + B^T\!\otimes A \;\in\; \mathbb{R}^{n^2\times n^2}
\tag{1553}
$$

A disadvantage is a large new but known set of algebraic rules, and the fact that mere use of vectorization does not generally guarantee a two-dimensional matrix representation of gradients.

Another application of the Kronecker product is to reverse order of appearance in a matrix product: Suppose we wish to weight the columns of a matrix $S \in \mathbb{R}^{M\times N}$, for example, by respective entries $w_i$ from the main diagonal in

$$
W \triangleq
\begin{bmatrix}
w_1 & & 0 \\
& \ddots & \\
0^T & & w_N
\end{bmatrix} \in \mathbb{S}^N
\tag{1554}
$$

The conventional way of accomplishing that is to multiply $S$ by the diagonal matrix $W$ on the right-hand side:$^{\text{D.2}}$

$$
SW = S
\begin{bmatrix}
w_1 & & 0 \\
& \ddots & \\
0^T & & w_N
\end{bmatrix}
= \begin{bmatrix} S(:,1)\,w_1 & \cdots & S(:,N)\,w_N \end{bmatrix}
\in \mathbb{R}^{M\times N}
\tag{1555}
$$

To reverse product order such that the diagonal matrix $W$ instead appears to the left of $S$: (Sze Wan)

$$
SW = \left(\delta(W)^T \otimes I\right)
\begin{bmatrix}
S(:,1) & 0 & & 0 \\
0 & S(:,2) & & \\
& & \ddots & 0 \\
0 & & 0 & S(:,N)
\end{bmatrix}
\in \mathbb{R}^{M\times N}
\tag{1556}
$$

where $I \in \mathbb{S}^M$. For any matrices of like size, $S, Y \in \mathbb{R}^{M\times N}$

$$
S \circ Y = \begin{bmatrix} \delta(Y(:,1)) & \cdots & \delta(Y(:,N)) \end{bmatrix}
\begin{bmatrix}
S(:,1) & 0 & & 0 \\
0 & S(:,2) & & \\
& & \ddots & 0 \\
0 & & 0 & S(:,N)
\end{bmatrix}
\in \mathbb{R}^{M\times N}
\tag{1557}
$$

$^{\text{D.2}}$ Multiplying on the left by $W \in \mathbb{S}^M$ would instead weight the rows of $S$.
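The Hessian formula (1553) is easy to sanity-check numerically. The following NumPy sketch (an illustration added here, not part of the book) verifies both the quadratic-form identity $\operatorname{tr}(AXBX^T) = \operatorname{vec}(X)^T (B^T\!\otimes A) \operatorname{vec} X$ and the claimed Hessian $B\otimes A^T + B^T\!\otimes A$, using the fact that the second-order gradient of $x^T M x$ is $M + M^T$; column-stacking $\operatorname{vec}(\cdot)$ corresponds to ravel(order='F').

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    X = rng.standard_normal((n, n))

    vecX = X.ravel(order='F')          # vec(): stack the columns of X

    # quadratic-form identity: tr(A X B X^T) = vec(X)^T (B^T kron A) vec(X)
    lhs = np.trace(A @ X @ B @ X.T)
    rhs = vecX @ np.kron(B.T, A) @ vecX
    assert np.isclose(lhs, rhs)

    # Hessian of x^T M x is M + M^T, which should match (1553)
    M = np.kron(B.T, A)
    H = np.kron(B, A.T) + np.kron(B.T, A)
    assert np.allclose(M + M.T, H)

The second assertion also illustrates the transpose rule $(B^T\!\otimes A)^T = B\otimes A^T$ used implicitly in (1553).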

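Equations (1556) and (1557) can be checked the same way. The sketch below (again illustrative, not from the source) builds the block-diagonal matrix of columns of $S$ with scipy.linalg.block_diag, uses np.diag for the $\delta(\cdot)$ operator, and confirms both product-order reversals on random data.

    import numpy as np
    from scipy.linalg import block_diag

    rng = np.random.default_rng(1)
    M, N = 3, 5
    S = rng.standard_normal((M, N))
    Y = rng.standard_normal((M, N))
    w = rng.standard_normal(N)
    W = np.diag(w)

    # block-diagonal matrix of the columns of S: shape (M*N, N)
    blk = block_diag(*[S[:, [j]] for j in range(N)])

    # (1556): S W = (delta(W)^T kron I) blk, where delta(W)^T kron I is M x MN
    assert np.allclose(S @ W, np.kron(w[None, :], np.eye(M)) @ blk)

    # (1557): Hadamard product S o Y = [diag Y(:,1) ... diag Y(:,N)] blk
    D = np.hstack([np.diag(Y[:, j]) for j in range(N)])   # M x MN
    assert np.allclose(S * Y, D @ blk)

The dimensions confirm the claims: $\delta(W)^T\!\otimes I$ is $M\times MN$ and the block-diagonal factor is $MN\times N$, so each product is $M\times N$ with $j$th column $w_j\,S(:,j)$ in (1556) and $\delta(Y(:,j))\,S(:,j) = Y(:,j)\circ S(:,j)$ in (1557).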