136 Linear Algebra, by Hefferon
(a) c_1 = 4 (b) c_1 = 4, c_2 = 3 (c) c_1 = 4, c_2 = 3, c_3 = 2, c_4 = 1
For the proof, we will do only the k = 2 case because the completely general case is messier but no more enlightening. We follow the hint (recall that for any vector ⃗w we have ‖⃗w‖² = ⃗w · ⃗w).
\begin{align*}
0 &\le \Bigl(\vec{v}-\bigl(\frac{\vec{v}\cdot\vec{\kappa}_1}{\vec{\kappa}_1\cdot\vec{\kappa}_1}\cdot\vec{\kappa}_1+\frac{\vec{v}\cdot\vec{\kappa}_2}{\vec{\kappa}_2\cdot\vec{\kappa}_2}\cdot\vec{\kappa}_2\bigr)\Bigr)\cdot\Bigl(\vec{v}-\bigl(\frac{\vec{v}\cdot\vec{\kappa}_1}{\vec{\kappa}_1\cdot\vec{\kappa}_1}\cdot\vec{\kappa}_1+\frac{\vec{v}\cdot\vec{\kappa}_2}{\vec{\kappa}_2\cdot\vec{\kappa}_2}\cdot\vec{\kappa}_2\bigr)\Bigr)\\
&= \vec{v}\cdot\vec{v}-2\cdot\vec{v}\cdot\bigl(\frac{\vec{v}\cdot\vec{\kappa}_1}{\vec{\kappa}_1\cdot\vec{\kappa}_1}\cdot\vec{\kappa}_1+\frac{\vec{v}\cdot\vec{\kappa}_2}{\vec{\kappa}_2\cdot\vec{\kappa}_2}\cdot\vec{\kappa}_2\bigr)+\bigl(\frac{\vec{v}\cdot\vec{\kappa}_1}{\vec{\kappa}_1\cdot\vec{\kappa}_1}\cdot\vec{\kappa}_1+\frac{\vec{v}\cdot\vec{\kappa}_2}{\vec{\kappa}_2\cdot\vec{\kappa}_2}\cdot\vec{\kappa}_2\bigr)\cdot\bigl(\frac{\vec{v}\cdot\vec{\kappa}_1}{\vec{\kappa}_1\cdot\vec{\kappa}_1}\cdot\vec{\kappa}_1+\frac{\vec{v}\cdot\vec{\kappa}_2}{\vec{\kappa}_2\cdot\vec{\kappa}_2}\cdot\vec{\kappa}_2\bigr)\\
&= \vec{v}\cdot\vec{v}-2\cdot\bigl(\frac{\vec{v}\cdot\vec{\kappa}_1}{\vec{\kappa}_1\cdot\vec{\kappa}_1}\cdot(\vec{v}\cdot\vec{\kappa}_1)+\frac{\vec{v}\cdot\vec{\kappa}_2}{\vec{\kappa}_2\cdot\vec{\kappa}_2}\cdot(\vec{v}\cdot\vec{\kappa}_2)\bigr)+\bigl(\frac{\vec{v}\cdot\vec{\kappa}_1}{\vec{\kappa}_1\cdot\vec{\kappa}_1}\bigr)^2\cdot(\vec{\kappa}_1\cdot\vec{\kappa}_1)+\bigl(\frac{\vec{v}\cdot\vec{\kappa}_2}{\vec{\kappa}_2\cdot\vec{\kappa}_2}\bigr)^2\cdot(\vec{\kappa}_2\cdot\vec{\kappa}_2)
\end{align*}
(The two mixed terms in the third part of the third line are zero because ⃗κ_1 and ⃗κ_2 are orthogonal.) The result now follows on gathering like terms and on recognizing that ⃗κ_1 · ⃗κ_1 = 1 and ⃗κ_2 · ⃗κ_2 = 1 because these vectors are given as members of an orthonormal set.
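As a quick numerical sanity check of the k = 2 computation (a sketch in plain Python, not part of the text; the vectors ⃗v, ⃗κ_1, ⃗κ_2 are illustrative choices), one can verify that the squared length of the residual equals ⃗v·⃗v − (⃗v·⃗κ_1)² − (⃗v·⃗κ_2)² after the terms are gathered, and that it is nonnegative.

```python
# Spot-check the k = 2 case: 0 <= v.v - (v.k1)^2 - (v.k2)^2
# for an orthonormal pair k1, k2. Vectors are illustrative choices.

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

v  = (1.0, 2.0, 3.0)
k1 = (1.0, 0.0, 0.0)   # orthonormal pair in R^3
k2 = (0.0, 1.0, 0.0)

# residual of v after subtracting its projections onto k1 and k2
resid = tuple(a - dot(v, k1) * b - dot(v, k2) * c
              for a, b, c in zip(v, k1, k2))

lhs = dot(resid, resid)                          # ||v - proj||^2, always >= 0
rhs = dot(v, v) - dot(v, k1)**2 - dot(v, k2)**2  # the gathered terms
print(lhs, rhs)  # 9.0 9.0
```

Both sides agree, confirming the gathering of like terms in the derivation above.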
Three.VI.2.19 It is true, except for the zero vector. Every vector in R^n except the zero vector is in a basis, and that basis can be orthogonalized.
Three.VI.2.20 The 3×3 case gives the idea. The set
\[
\Bigl\{\,\begin{pmatrix}a\\d\\g\end{pmatrix},\ \begin{pmatrix}b\\e\\h\end{pmatrix},\ \begin{pmatrix}c\\f\\i\end{pmatrix}\,\Bigr\}
\]
is orthonormal if and only if these nine conditions all hold
\[
\begin{array}{ccc}
\begin{pmatrix}a&d&g\end{pmatrix}\begin{pmatrix}a\\d\\g\end{pmatrix}=1 &
\begin{pmatrix}a&d&g\end{pmatrix}\begin{pmatrix}b\\e\\h\end{pmatrix}=0 &
\begin{pmatrix}a&d&g\end{pmatrix}\begin{pmatrix}c\\f\\i\end{pmatrix}=0 \\[1.5ex]
\begin{pmatrix}b&e&h\end{pmatrix}\begin{pmatrix}a\\d\\g\end{pmatrix}=0 &
\begin{pmatrix}b&e&h\end{pmatrix}\begin{pmatrix}b\\e\\h\end{pmatrix}=1 &
\begin{pmatrix}b&e&h\end{pmatrix}\begin{pmatrix}c\\f\\i\end{pmatrix}=0 \\[1.5ex]
\begin{pmatrix}c&f&i\end{pmatrix}\begin{pmatrix}a\\d\\g\end{pmatrix}=0 &
\begin{pmatrix}c&f&i\end{pmatrix}\begin{pmatrix}b\\e\\h\end{pmatrix}=0 &
\begin{pmatrix}c&f&i\end{pmatrix}\begin{pmatrix}c\\f\\i\end{pmatrix}=1
\end{array}
\]
(the three conditions in the lower left are redundant but nonetheless correct). Those, in turn, hold if and only if
\[
\begin{pmatrix}a&d&g\\ b&e&h\\ c&f&i\end{pmatrix}
\begin{pmatrix}a&b&c\\ d&e&f\\ g&h&i\end{pmatrix}
=
\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}
\]
as required.
Here is an example; the inverse of this matrix is its transpose.
\[
\begin{pmatrix}1/\sqrt{2}&1/\sqrt{2}&0\\ -1/\sqrt{2}&1/\sqrt{2}&0\\ 0&0&1\end{pmatrix}
\]
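The criterion just derived can be checked numerically for this example (a plain-Python sketch, not part of the text): multiplying the transpose against the matrix should give the identity, since the columns are orthonormal.

```python
import math

# Verify that M^T M = I for the example matrix, i.e. that its
# transpose is its inverse (the columns are orthonormal).
r = 1 / math.sqrt(2)
M = [[  r,   r, 0.0],
     [ -r,   r, 0.0],
     [0.0, 0.0, 1.0]]

Mt = [[M[j][i] for j in range(3)] for i in range(3)]   # transpose
P = [[sum(Mt[i][k] * M[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]                                 # product M^T M

I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
ok = all(abs(P[i][j] - I[i][j]) < 1e-12
         for i in range(3) for j in range(3))
print(ok)  # True
```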
Three.VI.2.21 If the set is empty then the summation on the left side is the linear combination of the empty set of vectors, which by definition adds to the zero vector. In the second sentence there is no such i, so the 'if . . . then . . . ' implication is vacuously true.
Three.VI.2.22 (a) Part of the induction argument proving Theorem 2.7 checks that ⃗κ_i is in the span of 〈⃗β_1, . . . , ⃗β_i〉. (The i = 3 case in the proof illustrates.) Thus, in the change of basis matrix Rep_{K,B}(id), the i-th column Rep_B(⃗κ_i) has components i + 1 through k that are zero.
(b) One way to see this is to recall the computational procedure that we use to find the inverse. We write the matrix, write the identity matrix next to it, and then we do Gauss-Jordan reduction. If the matrix starts out upper triangular then the Gauss-Jordan reduction involves only the Jordan half and these steps, when performed on the identity, will result in an upper triangular inverse matrix.
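The observation in (b) can be illustrated numerically. The sketch below (plain Python; the matrix T is an illustrative choice, not from the text) inverts an upper triangular matrix column by column with back-substitution, which is the kind of step the Jordan half performs, and confirms that every entry of the inverse below the diagonal comes out zero.

```python
# Invert an upper triangular matrix by back-substitution and
# observe that the inverse is again upper triangular.

def back_sub(T, b):
    """Solve T x = b for upper triangular T with nonzero diagonal."""
    n = len(T)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(T[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / T[i][i]
    return x

T = [[2.0, 1.0, 3.0],
     [0.0, 1.0, 4.0],
     [0.0, 0.0, 5.0]]

# the j-th column of the inverse solves T x = e_j
cols = [back_sub(T, [1.0 if i == j else 0.0 for i in range(3)])
        for j in range(3)]
Tinv = [[cols[j][i] for j in range(3)] for i in range(3)]

lower = [Tinv[i][j] for i in range(3) for j in range(3) if i > j]
print(lower)  # [0.0, 0.0, 0.0]
```

Each column's back-substitution never fills an entry below the diagonal, which mirrors the argument that the Jordan half leaves the triangular shape intact.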
Three.VI.2.23 For the inductive step, we assume that for all j in [1..i], these three conditions are true of each ⃗κ_j: (i) each ⃗κ_j is nonzero, (ii) each ⃗κ_j is a linear combination of the vectors ⃗β_1, . . . , ⃗β_j, and