Linear Algebra Exercises-n-Answers.pdf
114 Linear Algebra, by Hefferon

Three.IV.3.30 It is false; these two don't commute.
\[
\begin{pmatrix} 1&0 \\ 0&0 \end{pmatrix}
\qquad
\begin{pmatrix} 0&0 \\ 1&0 \end{pmatrix}
\]
Three.IV.3.31 A permutation matrix has a single one in each row and column, and all its other entries are zeroes. Fix such a matrix. Suppose that the i-th row has its one in the j-th column. Then no other row has its one in the j-th column; every other row has a zero in the j-th column. Thus the dot product of the i-th row and any other row is zero.
The i-th row of the product is made up of the dot products of the i-th row of the matrix and the columns of the transpose. By the last paragraph, all such dot products are zero except for the i-th one, which is one.
Three.IV.3.32 The generalization is to go from the first and second rows to the i_1-th and i_2-th rows. Row i of GH is made up of the dot products of row i of G and the columns of H. Thus if rows i_1 and i_2 of G are equal then so are rows i_1 and i_2 of GH.
Three.IV.3.33 If the product of two diagonal matrices is defined, that is, if both are n×n, then the product is also diagonal, and the diagonal of the product is the product of the diagonals: where G, H are equal-sized diagonal matrices, GH is all zeros except that each i,i entry is g_{i,i} h_{i,i}.
Three.IV.3.34 One way to produce this matrix from the identity is to use the column operations of first multiplying the second column by three, and then adding the negative of the resulting second column to the first.
\[
\begin{pmatrix} 1&0 \\ 0&1 \end{pmatrix}
\longrightarrow
\begin{pmatrix} 1&0 \\ 0&3 \end{pmatrix}
\longrightarrow
\begin{pmatrix} 1&0 \\ -3&3 \end{pmatrix}
\]
Column operations, in contrast with row operations, are written from left to right, so doing the above two operations is expressed with this matrix product.
\[
\begin{pmatrix} 1&0 \\ 0&3 \end{pmatrix}
\begin{pmatrix} 1&0 \\ -1&1 \end{pmatrix}
\]
Remark. Alternatively, we could get the required matrix with row operations. Starting with the identity, first adding the negative of the first row to the second, and then multiplying the second row by three will work.
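The non-commuting pair of Three.IV.3.30 and the column-operation product of Three.IV.3.34 can be spot-checked numerically. This is a minimal sketch; the helper `matmul` and the variable names are illustrative choices, not from the text.

```python
# Numeric spot-check of Three.IV.3.30 and Three.IV.3.34.
# `matmul` and the variable names are illustrative, not from the text.

def matmul(G, H):
    """Multiply matrices given as lists of rows."""
    return [[sum(G[i][k] * H[k][j] for k in range(len(H)))
             for j in range(len(H[0]))]
            for i in range(len(G))]

# Three.IV.3.30: these two do not commute.
A = [[1, 0], [0, 0]]
B = [[0, 0], [1, 0]]
assert matmul(A, B) != matmul(B, A)

# Three.IV.3.34: the two column operations, written left to right,
# multiply out to the matrix that the operations themselves produce.
triple_col2 = [[1, 0], [0, 3]]
add_neg_col2_to_col1 = [[1, 0], [-1, 1]]
assert matmul(triple_col2, add_neg_col2_to_col1) == [[1, 0], [-3, 3]]
```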
Because successive row operations are written as matrix products from right to left, doing these two row operations is expressed with the same matrix product.
Three.IV.3.35 The i-th row of GH is made up of the dot products of the i-th row of G with the columns of H. The dot product of a zero row with a column is zero. It works for columns if stated correctly: if H has a column of zeros then GH (if defined) has a column of zeros. The proof is easy.
Three.IV.3.36 Perhaps the easiest way is to show that each n×m matrix is a linear combination of unit matrices in one and only one way:
\[
c_{1,1}\begin{pmatrix} 1&0&\ldots \\ 0&0 \\ &\vdots \end{pmatrix}
+ \cdots +
c_{n,m}\begin{pmatrix} 0&0&\ldots \\ &\vdots \\ 0&\ldots&1 \end{pmatrix}
=
\begin{pmatrix} a_{1,1}&a_{1,2}&\ldots \\ &\vdots \\ a_{n,1}&\ldots&a_{n,m} \end{pmatrix}
\]
has the unique solution c_{1,1} = a_{1,1}, c_{1,2} = a_{1,2}, etc.
Three.IV.3.37 Call that matrix F. We have
\[
F^2 = \begin{pmatrix} 2&1 \\ 1&1 \end{pmatrix}
\quad
F^3 = \begin{pmatrix} 3&2 \\ 2&1 \end{pmatrix}
\quad
F^4 = \begin{pmatrix} 5&3 \\ 3&2 \end{pmatrix}
\]
In general,
\[
F^n = \begin{pmatrix} f_{n+1}&f_n \\ f_n&f_{n-1} \end{pmatrix}
\]
where f_i is the i-th Fibonacci number, f_i = f_{i-1} + f_{i-2} with f_0 = 0 and f_1 = 1, which is verified by induction, based on this equation.
\[
\begin{pmatrix} f_{i-1}&f_{i-2} \\ f_{i-2}&f_{i-3} \end{pmatrix}
\begin{pmatrix} 1&1 \\ 1&0 \end{pmatrix}
=
\begin{pmatrix} f_i&f_{i-1} \\ f_{i-1}&f_{i-2} \end{pmatrix}
\]
Three.IV.3.38 Chapter Five gives a less computational reason (the trace of a matrix is the second coefficient in its characteristic polynomial) but for now we can use indices. We have
\begin{align*}
\operatorname{trace}(GH) = {} & (g_{1,1}h_{1,1} + g_{1,2}h_{2,1} + \cdots + g_{1,n}h_{n,1}) \\
& + (g_{2,1}h_{1,2} + g_{2,2}h_{2,2} + \cdots + g_{2,n}h_{n,2}) \\
& + \cdots + (g_{n,1}h_{1,n} + g_{n,2}h_{2,n} + \cdots + g_{n,n}h_{n,n})
\end{align*}
while
\begin{align*}
\operatorname{trace}(HG) = {} & (h_{1,1}g_{1,1} + h_{1,2}g_{2,1} + \cdots + h_{1,n}g_{n,1}) \\
& + (h_{2,1}g_{1,2} + h_{2,2}g_{2,2} + \cdots + h_{2,n}g_{n,2}) \\
& + \cdots + (h_{n,1}g_{1,n} + h_{n,2}g_{2,n} + \cdots + h_{n,n}g_{n,n})
\end{align*}
and the two are equal.
Three.IV.3.39 A matrix is upper triangular if and only if its i,j entry is zero whenever i > j. Thus, if G, H are upper triangular then h_{i,j} and g_{i,j} are zero when i > j. An entry in the product p_{i,j} = g_{i,1}h_{1,j} + \cdots + g_{i,n}h_{n,j} is zero unless at least some of the terms are nonzero, that is, unless for at least some of the summands g_{i,r}h_{r,j} both i ≤ r and r ≤ j. Of course, if i > j this cannot happen and so the product of two upper triangular matrices is upper triangular. (A similar argument works for lower triangular matrices.)
Three.IV.3.40 The sum along the i-th row of the product is this.
\begin{align*}
p_{i,1} + \cdots + p_{i,n}
& = (h_{i,1}g_{1,1} + h_{i,2}g_{2,1} + \cdots + h_{i,n}g_{n,1}) + (h_{i,1}g_{1,2} + h_{i,2}g_{2,2} + \cdots + h_{i,n}g_{n,2}) \\
& \quad + \cdots + (h_{i,1}g_{1,n} + h_{i,2}g_{2,n} + \cdots + h_{i,n}g_{n,n}) \\
& = h_{i,1}(g_{1,1} + g_{1,2} + \cdots + g_{1,n}) + h_{i,2}(g_{2,1} + g_{2,2} + \cdots + g_{2,n}) \\
& \quad + \cdots + h_{i,n}(g_{n,1} + g_{n,2} + \cdots + g_{n,n}) \\
& = h_{i,1}\cdot 1 + \cdots + h_{i,n}\cdot 1 = 1
\end{align*}
Three.IV.3.41 Matrices representing (say, with respect to E_2, E_2 ⊂ R^2) the maps that send
\[
\vec\beta_1 \overset{h}{\longmapsto} \vec\beta_1
\quad
\vec\beta_2 \overset{h}{\longmapsto} \vec 0
\qquad\text{and}\qquad
\vec\beta_1 \overset{g}{\longmapsto} \vec\beta_2
\quad
\vec\beta_2 \overset{g}{\longmapsto} \vec 0
\]
will do.
Three.IV.3.42 The combination is to have all entries of the matrix be zero except for one (possibly) nonzero entry in each row and column. Such a matrix can be written as the product of a permutation matrix and a diagonal matrix, e.g.,
\[
\begin{pmatrix} 0&4&0 \\ 2&0&0 \\ 0&0&-5 \end{pmatrix}
=
\begin{pmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{pmatrix}
\begin{pmatrix} 2&0&0 \\ 0&4&0 \\ 0&0&-5 \end{pmatrix}
\]
and its action is thus to rescale the rows and permute them.
Three.IV.3.43 (a) Each entry p_{i,j} = g_{i,1}h_{1,j} + \cdots + g_{i,r}h_{r,j} takes r multiplications and there are m·n entries. Thus there are m·n·r multiplications.
(b) Let H_1 be 5×10, let H_2 be 10×20, let H_3 be 20×5, let H_4 be 5×1.
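Several of the claims above lend themselves to a quick numeric check: the Fibonacci pattern in powers of F (Three.IV.3.37), the equality of trace(GH) and trace(HG) (Three.IV.3.38), and the permutation-times-diagonal factorization (Three.IV.3.42). This sketch uses plain nested lists; the helpers and the sample matrices G, H are my own illustrative choices.

```python
# Numeric spot-checks of Three.IV.3.37, Three.IV.3.38, and Three.IV.3.42.
# The helpers and the sample matrices G, H are illustrative choices.

def matmul(G, H):
    """Multiply matrices given as lists of rows."""
    return [[sum(G[i][k] * H[k][j] for k in range(len(H)))
             for j in range(len(H[0]))]
            for i in range(len(G))]

def trace(M):
    """Sum of the diagonal entries."""
    return sum(M[i][i] for i in range(len(M)))

# Three.IV.3.37: powers of F = (1 1; 1 0) hold consecutive Fibonacci numbers.
F = [[1, 1], [1, 0]]
P = F
for _ in range(3):               # P becomes F^4
    P = matmul(P, F)
assert P == [[5, 3], [3, 2]]     # (f_5 f_4; f_4 f_3)

# Three.IV.3.38: trace(GH) = trace(HG), here for one arbitrary pair.
G = [[1, 2], [3, 4]]
H = [[5, 6], [7, 8]]
assert trace(matmul(G, H)) == trace(matmul(H, G))

# Three.IV.3.42: the permutation-times-diagonal factorization.
perm = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
diag = [[2, 0, 0], [0, 4, 0], [0, 0, -5]]
assert matmul(perm, diag) == [[0, 4, 0], [2, 0, 0], [0, 0, -5]]
```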
Then, using the formula from the prior part, this table of associations and their multiplication counts shows which is cheapest.

((H_1 H_2) H_3) H_4 : 1000 + 500 + 25 = 1525
(H_1 (H_2 H_3)) H_4 : 1000 + 250 + 25 = 1275
(H_1 H_2)(H_3 H_4) : 1000 + 100 + 100 = 1200
H_1 (H_2 (H_3 H_4)) : 100 + 200 + 50 = 350
H_1 ((H_2 H_3) H_4) : 1000 + 50 + 50 = 1100

(c) This is reported by Knuth as an improvement by S. Winograd of a formula due to V. Strassen: where w = aA − (a − c − d)(A − C + D),
\[
\begin{pmatrix} a&b \\ c&d \end{pmatrix}
\begin{pmatrix} A&C \\ B&D \end{pmatrix}
=
\begin{pmatrix}
aA + bB & w + (c+d)(C-A) + (a+b-c-d)D \\
w + (a-c)(D-C) - d(A-B-C+D) & w + (a-c)(D-C) + (c+d)(C-A)
\end{pmatrix}
\]
takes seven multiplications and fifteen additions (save the intermediate results).
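Both parts can be verified mechanically: the multiplication counts in (b) follow from the m·n·r formula of part (a), and the four entries of the Winograd product in (c) reduce algebraically to the ordinary dot products, which we can confirm on scalars. The `cost` helper and the particular scalar values are illustrative choices, not from the text.

```python
# Sketch checking parts (b) and (c) of Three.IV.3.43. The `cost` helper
# and the scalar instantiation of the formula are illustrative choices.

def cost(m, r, n):
    """Multiplications needed for an (m x r)(r x n) product, per part (a)."""
    return m * r * n

# Part (b): H1 is 5x10, H2 is 10x20, H3 is 20x5, H4 is 5x1.
assert cost(5, 10, 20) + cost(5, 20, 5) + cost(5, 5, 1) == 1525    # ((H1 H2) H3) H4
assert cost(10, 20, 5) + cost(5, 10, 5) + cost(5, 5, 1) == 1275    # (H1 (H2 H3)) H4
assert cost(5, 10, 20) + cost(20, 5, 1) + cost(5, 20, 1) == 1200   # (H1 H2)(H3 H4)
assert cost(20, 5, 1) + cost(10, 20, 1) + cost(5, 10, 1) == 350    # H1 (H2 (H3 H4))
assert cost(10, 20, 5) + cost(10, 5, 1) + cost(5, 10, 1) == 1100   # H1 ((H2 H3) H4)

# Part (c): each Winograd entry equals the ordinary dot product
# (checked on scalars; the (1,1) entry is aA + bB directly).
a, b, c, d = 2, 3, 5, 7
A, B, C, D = 11, 13, 17, 19
w = a * A - (a - c - d) * (A - C + D)
assert w + (c + d) * (C - A) + (a + b - c - d) * D == a * C + b * D   # (1,2)
assert w + (a - c) * (D - C) - d * (A - B - C + D) == c * A + d * B   # (2,1)
assert w + (a - c) * (D - C) + (c + d) * (C - A) == c * C + d * D     # (2,2)
```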