
Answers to Exercises

To see that no set with five or more vectors can be independent, set up
$$c_1\begin{pmatrix}a_{1,1}\\a_{2,1}\\a_{3,1}\\a_{4,1}\end{pmatrix}
+c_2\begin{pmatrix}a_{1,2}\\a_{2,2}\\a_{3,2}\\a_{4,2}\end{pmatrix}
+c_3\begin{pmatrix}a_{1,3}\\a_{2,3}\\a_{3,3}\\a_{4,3}\end{pmatrix}
+c_4\begin{pmatrix}a_{1,4}\\a_{2,4}\\a_{3,4}\\a_{4,4}\end{pmatrix}
+c_5\begin{pmatrix}a_{1,5}\\a_{2,5}\\a_{3,5}\\a_{4,5}\end{pmatrix}
=\begin{pmatrix}0\\0\\0\\0\end{pmatrix}$$

and note that the resulting linear system
$$\begin{aligned}
a_{1,1}c_1+a_{1,2}c_2+a_{1,3}c_3+a_{1,4}c_4+a_{1,5}c_5&=0\\
a_{2,1}c_1+a_{2,2}c_2+a_{2,3}c_3+a_{2,4}c_4+a_{2,5}c_5&=0\\
a_{3,1}c_1+a_{3,2}c_2+a_{3,3}c_3+a_{3,4}c_4+a_{3,5}c_5&=0\\
a_{4,1}c_1+a_{4,2}c_2+a_{4,3}c_3+a_{4,4}c_4+a_{4,5}c_5&=0
\end{aligned}$$

has four equations and five unknowns, so Gauss’ method must end with at least one of the $c$ variables free. There are therefore infinitely many solutions, and so the linear relationship above among the four-tall vectors has more solutions than just the trivial one. (A small numerical sketch of this counting argument appears below.)

The smallest linearly independent set is the empty set.

The biggest linearly dependent set is $\mathbb{R}^4$. The smallest is $\{\vec{0}\}$.
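As a side note, the counting argument above is easy to check numerically. The following is a minimal sketch, assuming Python with numpy; the random matrix and the use of the singular value decomposition are illustration choices, not anything from the text. Five four-tall vectors, stacked as the columns of a $4\times 5$ matrix, always admit a nontrivial solution of the homogeneous system.

```python
# Sketch: five vectors in R^4, as the columns of a 4x5 matrix A, always
# satisfy a nontrivial linear relationship A @ c = 0, because the homogeneous
# system has more unknowns than equations.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))   # five four-tall vectors as columns

# A 4x5 matrix has at most four singular values, so the last right-singular
# vector is sent to zero by A; it gives a nontrivial choice of c_1, ..., c_5.
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]

print(np.allclose(A @ c, 0))      # True: c_1*v_1 + ... + c_5*v_5 = 0
print(np.linalg.norm(c))          # 1.0, so the relationship is not the trivial one
```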

Two.II.1.36 (a) The intersection $S\cap T$ of two linearly independent sets must be linearly independent, as it is a subset of the linearly independent set $S$ (and of the linearly independent set $T$ as well).

(b) The complement of a linearly independent set is linearly dependent: a linearly independent set cannot contain the zero vector, so $\vec{0}$ lies in the complement, and any set containing $\vec{0}$ is linearly dependent.

(c) We must produce an example. One, in $\mathbb{R}^2$, is
$$S=\left\{\begin{pmatrix}1\\0\end{pmatrix}\right\}\qquad\text{and}\qquad T=\left\{\begin{pmatrix}2\\0\end{pmatrix}\right\}$$
since the linear dependence of $S\cup T$ is easily seen; for instance, $2\begin{pmatrix}1\\0\end{pmatrix}-\begin{pmatrix}2\\0\end{pmatrix}=\vec{0}$ is a nontrivial relationship.

(d) The union $S\cup T$ of two linearly independent sets is linearly independent if and only if their spans have a trivial intersection, $[S]\cap[T]=\{\vec{0}\}$. To prove that, assume that $S$ and $T$ are linearly independent subsets of some vector space.

For the ‘if’ direction, assume that the intersection of the spans is trivial, $[S]\cap[T]=\{\vec{0}\}$.

Consider the set $S\cup T$. Any linear relationship $c_1\vec{s}_1+\cdots+c_n\vec{s}_n+d_1\vec{t}_1+\cdots+d_m\vec{t}_m=\vec{0}$ gives $c_1\vec{s}_1+\cdots+c_n\vec{s}_n=-d_1\vec{t}_1-\cdots-d_m\vec{t}_m$. The left side of that equation sums to a vector in $[S]$, and the right side is a vector in $[T]$. Therefore, since the intersection of the spans is trivial, both sides equal the zero vector. Because $S$ is linearly independent, all of the $c$'s are zero. Because $T$ is linearly independent, all of the $d$'s are zero. Thus the original linear relationship among members of $S\cup T$ only holds if all of the coefficients are zero. That shows that $S\cup T$ is linearly independent.

For the ‘only if’ half we can make the same argument in reverse. If the union $S\cup T$ is linearly independent, that is, if the only solution to $c_1\vec{s}_1+\cdots+c_n\vec{s}_n+d_1\vec{t}_1+\cdots+d_m\vec{t}_m=\vec{0}$ is the trivial solution $c_1=0,\dots,d_m=0$, then consider any vector $\vec{v}$ in the intersection of the spans, $\vec{v}=c_1\vec{s}_1+\cdots+c_n\vec{s}_n=-d_1\vec{t}_1-\cdots-d_m\vec{t}_m$. Rearranging gives a linear relationship among members of $S\cup T$, so each scalar is zero, and therefore $\vec{v}$ must be the zero vector.
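The criterion in part (d) also lends itself to a quick numerical check. Here is a minimal sketch, assuming Python with numpy; the function name union_is_independent and the example vectors are illustrative choices, not from the text. With the vectors of each independent set as matrix columns, both ‘$S\cup T$ is linearly independent’ and ‘$[S]\cap[T]=\{\vec{0}\}$’ come down to $\operatorname{rank}[S\,|\,T]=|S|+|T|$.

```python
# Sketch of part (d): for linearly independent S and T, the union S ∪ T is
# independent exactly when the spans meet only in the zero vector; with the
# vectors as matrix columns, both conditions are rank([S | T]) = |S| + |T|.
import numpy as np

def union_is_independent(S, T):
    """S, T: matrices whose columns are the vectors of each set."""
    return np.linalg.matrix_rank(np.hstack([S, T])) == S.shape[1] + T.shape[1]

# Example in R^3: the spans meet trivially, so the union is independent.
S = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # columns e_1, e_2
T = np.array([[0.0], [0.0], [1.0]])                   # column e_3
print(union_is_independent(S, T))                     # True

# Example where the spans overlap: e_1 + e_2 already lies in [S].
T2 = np.array([[1.0], [1.0], [0.0]])
print(union_is_independent(S, T2))                    # False
```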

Two.II.1.37 (a) We do induction on the number of vectors in the finite set $S$.

The base case is that $S$ has no elements. In this case $S$ is linearly independent and there is nothing to check: a subset of $S$ that has the same span as $S$ is $S$ itself.

For the inductive step, assume that the theorem is true for all sets of size $n=0,1,\dots,k$ in order to prove that it holds when $S$ has $n=k+1$ elements. If the $(k+1)$-element set $S=\{\vec{s}_0,\dots,\vec{s}_k\}$ is linearly independent then the theorem is trivial, so assume that it is dependent. By Corollary 1.17 there is an $\vec{s}_i$ that is a linear combination of the other vectors in $S$. Define $S_1=S-\{\vec{s}_i\}$ and note that $S_1$ has the same span as $S$ by Lemma 1.1. The set $S_1$ has $k$ elements, so the inductive hypothesis gives that it has a linearly independent subset with the same span. That subset of $S_1$ is the desired subset of $S$.

(b) Here is a sketch of the argument; the details of the induction have been left out.

If the finite set $S$ is empty then there is nothing to prove. If $S=\{\vec{0}\}$ then the empty subset will do.

Otherwise, take some nonzero vector $\vec{s}_1\in S$ and define $S_1=\{\vec{s}_1\}$. If $[S_1]=[S]$ then this proof is finished by noting that $S_1$ is linearly independent.

If not, then there is a nonzero vector $\vec{s}_2\in S-[S_1]$ (if every $\vec{s}\in S$ were in $[S_1]$ then we would have $[S_1]=[S]$). Define $S_2=S_1\cup\{\vec{s}_2\}$. If $[S_2]=[S]$ then this proof is finished by using Theorem 1.17 to show that $S_2$ is linearly independent.
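The two arguments of Two.II.1.37 suggest concrete procedures: part (a) prunes a redundant vector at each step, while part (b) builds the subset up by adjoining vectors outside the current span, continuing in the same way until the span of the chosen vectors is all of $[S]$. Here is a minimal computational sketch of both, assuming Python with numpy; the function names and the example set are illustrative, not from the text. Each returns a linearly independent subset of the finite set with the same span.

```python
# Sketches of the two procedures: pruning (part (a) style) and building up
# (part (b) style).  Both end with an independent subset having the same span.
import numpy as np

def prune_to_independent(vectors):
    """Part (a) style: repeatedly remove a vector that is a combination of the others."""
    vecs = list(vectors)
    changed = True
    while changed:
        changed = False
        total = np.linalg.matrix_rank(np.column_stack(vecs)) if vecs else 0
        for i in range(len(vecs)):
            rest = vecs[:i] + vecs[i + 1:]
            rest_rank = np.linalg.matrix_rank(np.column_stack(rest)) if rest else 0
            # vecs[i] is a combination of the others exactly when dropping it
            # leaves the span (hence the rank) unchanged.
            if rest_rank == total:
                del vecs[i]
                changed = True
                break
    return vecs

def build_up_independent(vectors):
    """Part (b) style: keep a vector only if it lies outside the current span."""
    chosen = []
    for v in vectors:
        candidate = chosen + [v]
        # Adjoining v keeps the set independent exactly when the rank goes up.
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            chosen = candidate
    return chosen

S = [np.array([0., 0.]), np.array([1., 1.]), np.array([2., 2.]), np.array([0., 3.])]
print(prune_to_independent(S))     # two independent vectors spanning the same plane
print(build_up_independent(S))     # [array([1., 1.]), array([0., 3.])]
```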
