The Computable Differential Equation Lecture ... - Bruce E. Shapiro
Math 582B, Spring 2007, California State University Northridge. © 2007 B.E. Shapiro. Last revised: May 23, 2007.
Chapter 2

Successive Approximations

2.1 Picard Iteration

The Method of Successive Approximations, or Picard Iteration, solves the initial value problem

    y′ = f(t, y)                                  (2.1)
    y(t_0) = y_0                                  (2.2)

through a sequence of recursive iterations. Any function y = φ(t) that satisfies (2.1) and (2.2) must also solve the integral equation

    φ(t) = y_0 + ∫_{t_0}^{t} f(s, φ(s)) ds        (2.3)

The method is summarized below in Algorithm 2.1.

Algorithm 2.1. Picard Iteration. To solve the initial value problem

    y′ = f(t, y),  y(t_0) = y_0

for the function y(t):

1. input: f(t, y), t_0, y_0, n_max
2. let φ_0(t) = y_0
3. for i = 1, 2, . . . , n_max:
       let φ_i(t) = y_0 + ∫_{t_0}^{t} f(s, φ_{i-1}(s)) ds
4. output: φ_{n_max}(t)
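As an illustrative sketch (not part of the original notes), Algorithm 2.1 can be carried out symbolically when the integrals in step 3 have closed forms. The helper below uses sympy (an assumed dependency; the function name `picard_iterates` is ours) to build the sequence φ_0, φ_1, . . . , φ_{n_max}. For the classic test problem y′ = y, y(0) = 1, the iterates are exactly the partial sums of the Taylor series of e^t.

```python
import sympy as sp

t, s = sp.symbols('t s')

def picard_iterates(f, t0, y0, n_max):
    """Return the list [phi_0, phi_1, ..., phi_{n_max}] of Picard
    iterates for y' = f(t, y), y(t0) = y0, as sympy expressions in t."""
    phi = sp.sympify(y0)          # phi_0(t) = y_0, a constant function
    iterates = [phi]
    for _ in range(n_max):
        # Step 3: phi_i(t) = y_0 + integral from t0 to t of f(s, phi_{i-1}(s)) ds
        integrand = f(s, phi.subs(t, s))
        phi = sp.expand(y0 + sp.integrate(integrand, (s, t0, t)))
        iterates.append(phi)
    return iterates

# Example: y' = y, y(0) = 1; the exact solution is e^t.
its = picard_iterates(lambda s, y: y, 0, 1, 3)
# its[3] is the cubic Taylor partial sum 1 + t + t**2/2 + t**3/6
```

Each iterate refines the previous one by one more pass through the integral equation (2.3), so for this example φ_n agrees with e^t through order t^n.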