Lecture Notes in Differential Equations - Bruce E. Shapiro
LESSON 31. LINEAR SYSTEMS

Let $y_1, \dots, y_n$ denote the column vectors of $W$. Then by (31.93),

$$\begin{pmatrix} y_1' & \cdots & y_n' \end{pmatrix} = W' = AW = A \begin{pmatrix} y_1 & \cdots & y_n \end{pmatrix} = \begin{pmatrix} Ay_1 & \cdots & Ay_n \end{pmatrix}. \tag{31.94--31.97}$$

Equating columns,

$$y_i' = Ay_i, \qquad i = 1, \dots, n, \tag{31.98}$$

hence each column of $W$ is a solution of the differential equation. Furthermore, by property (3) of the Matrix Exponential, $W = e^{At}$ is invertible. Since a matrix is invertible if and only if all of its column vectors are linearly independent, the columns of $W$ form a linearly independent set of solutions to the differential equation.

To prove that they are a fundamental set of solutions, suppose that $y(t)$ is a solution of the initial value problem with $y(t_0) = y_0$. We must show that it is a linear combination of the columns of $W$. Since the matrix $W$ is invertible, the numbers $C_1, C_2, \dots, C_n$, which are the components of the vector

$$C = [W(t_0)]^{-1} y_0, \tag{31.99}$$

exist. But

$$\Psi = WC = \begin{pmatrix} y_1 & \cdots & y_n \end{pmatrix} \begin{pmatrix} C_1 \\ \vdots \\ C_n \end{pmatrix} = C_1 y_1 + \cdots + C_n y_n \tag{31.100--31.102}$$

is a solution of the differential equation, and by (31.99), $\Psi(t_0) = W(t_0)C = y_0$, so that $\Psi(t)$ also satisfies the initial value problem. By the uniqueness theorem, $y(t)$ and $\Psi(t)$ must be identical. Hence every solution of $y' = Ay$ is a linear combination of the column vectors of $W$, because any solution can be considered a solution of some initial value problem. Thus the column vectors form a fundamental set of solutions, and hence $W$ is a fundamental matrix.
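The two facts just proved can be checked numerically. The following is a minimal sketch (not part of the original notes) using scipy's matrix exponential: it verifies that $W(t) = e^{At}$ satisfies $W' = AW$, that $W(t)$ is invertible, and that the combination $\Psi = WC$ with $C = [W(t_0)]^{-1}y_0$ from (31.99)--(31.102) reproduces the initial value problem's solution. The matrix $A$, the time $t$, and the initial condition $y_0$ are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential e^{At}

# Illustrative 2x2 system y' = Ay; A, t0, and y0 are arbitrary choices.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t0, y0 = 0.0, np.array([1.0, -1.0])

def W(t):
    """Fundamental matrix W(t) = e^{At}."""
    return expm(A * t)

# Each column of W solves y' = Ay: compare a central-difference
# approximation of W'(t) against A @ W(t).
t, h = 0.7, 1e-6
dW = (W(t + h) - W(t - h)) / (2 * h)        # numerical W'(t)
print(np.allclose(dW, A @ W(t), atol=1e-5))  # True

# The columns are linearly independent: W(t) has nonzero determinant.
print(not np.isclose(np.linalg.det(W(t)), 0.0))  # True

# Psi = W C with C = [W(t0)]^{-1} y0 solves the IVP y(t0) = y0:
# it must agree with the known solution e^{A(t - t0)} y0.
C = np.linalg.solve(W(t0), y0)
Psi = W(t) @ C
print(np.allclose(Psi, expm(A * (t - t0)) @ y0))  # True
```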
Theorem 31.8. (Abel's Formula.) The Wronskian of $y' = Ay$, where $A$ is a constant matrix, is

$$W(t) = W(t_0)\, e^{(t - t_0)\operatorname{trace}(A)}. \tag{31.103}$$

If $A$ is a function of $t$, then

$$W(t) = W(t_0) \exp \int_{t_0}^{t} \operatorname{trace}(A(s))\, ds. \tag{31.104}$$

Proof. Let $W$ be a fundamental matrix of $y' = Ay$, with entries $y_{ij}$, where $y_{ij}$ is the $j$th component of the $i$th solution vector (row $j$, column $i$). Then by the formula for the differentiation of a determinant, the derivative of the Wronskian is a sum of $n$ determinants, in each of which exactly one row has been differentiated:

$$W'(t) = \begin{vmatrix} y_{11}' & y_{21}' & \cdots & y_{n1}' \\ y_{12} & y_{22} & \cdots & y_{n2} \\ \vdots & & & \vdots \\ y_{1n} & y_{2n} & \cdots & y_{nn} \end{vmatrix} + \begin{vmatrix} y_{11} & y_{21} & \cdots & y_{n1} \\ y_{12}' & y_{22}' & \cdots & y_{n2}' \\ \vdots & & & \vdots \\ y_{1n} & y_{2n} & \cdots & y_{nn} \end{vmatrix} + \cdots + \begin{vmatrix} y_{11} & y_{21} & \cdots & y_{n1} \\ y_{12} & y_{22} & \cdots & y_{n2} \\ \vdots & & & \vdots \\ y_{1n}' & y_{2n}' & \cdots & y_{nn}' \end{vmatrix} \tag{31.105}$$

But since $W$ satisfies the differential equation, $W' = AW$, so that

$$W' = AW = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} y_1 & \cdots & y_n \end{pmatrix} = \begin{pmatrix} a_1 \cdot y_1 & \cdots & a_1 \cdot y_n \\ \vdots & & \vdots \\ a_n \cdot y_1 & \cdots & a_n \cdot y_n \end{pmatrix} \tag{31.106--31.107}$$

where $a_i$ is the $i$th row vector of $A$, and $a_i \cdot y_j$ denotes the dot product between the $i$th row of $A$ and the $j$th solution vector $y_j$.
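Both cases of Abel's formula can be confirmed numerically. The sketch below (not part of the original notes) checks (31.103) directly via $\det e^{At} = e^{t\,\operatorname{trace}(A)}$, and checks (31.104) by integrating the matrix equation $W' = A(t)W$ with scipy's `solve_ivp` and comparing $\det W(t)$ against $\det W(t_0)\exp\int_{t_0}^t \operatorname{trace}(A(s))\,ds$. The matrices $A$ and $A(s)$ and the interval are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad

# Constant-coefficient case (31.103): det e^{A(t-t0)} = e^{(t-t0) trace(A)}.
A = np.array([[1.0, 2.0],
              [0.5, -3.0]])   # arbitrary illustrative matrix
t0, t = 0.0, 1.3
lhs = np.linalg.det(expm(A * (t - t0)))
rhs = np.exp((t - t0) * np.trace(A))
print(np.isclose(lhs, rhs))   # True

# Time-dependent case (31.104): integrate W' = A(t) W as a flattened
# system, then compare det W(t) with det W(t0) * exp(int trace A(s) ds).
def At(s):
    """An arbitrary time-dependent coefficient matrix A(s)."""
    return np.array([[np.sin(s), 1.0],
                     [0.0, np.cos(s)]])

def rhs_flat(s, w):
    """Right-hand side of W' = A(s) W, acting on the flattened matrix."""
    Wm = w.reshape(2, 2)
    return (At(s) @ Wm).ravel()

W0 = np.eye(2)   # start from the identity, so det W(t0) = 1
sol = solve_ivp(rhs_flat, (t0, t), W0.ravel(), rtol=1e-10, atol=1e-12)
detW = np.linalg.det(sol.y[:, -1].reshape(2, 2))

integral, _ = quad(lambda s: np.trace(At(s)), t0, t)
print(np.isclose(detW, np.linalg.det(W0) * np.exp(integral)))  # True
```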