The Computable Differential Equation Lecture ... - Bruce E. Shapiro
74 CHAPTER 4. IMPROVING ON EULER'S METHOD

Definition 4.5 (Local Truncation Error). Let u(t_n) be any function defined on a mesh, and define the difference operator \mathcal{N} by

    \mathcal{N} u(t_n) = \frac{u(t_n) - u(t_{n-1})}{h_n} - \phi(t_{n-1}, u(t_{n-1}))    (4.45)

The local truncation error d_n is given by \mathcal{N} y(t_n), where y(t_n) is the exact solution evaluated on the mesh:

    d_n = \mathcal{N} y(t_n) = \frac{y(t_n) - y(t_{n-1})}{h_n} - \phi(t_{n-1}, y(t_{n-1}))    (4.46)

The local truncation error gives an estimate of the error made in discretizing the differential equation at y_n assuming that there are no errors at y_{n-1}; it is the local error in the calculation of the derivative. For Euler's method,

    y_{n+1} = y_n + h_n f(t_n, y_n)    (4.47)

we can derive the local truncation error using a Taylor series expansion about y(t_{n-1}):

    d_n = \mathcal{N} y(t_n) = \frac{y(t_n) - y(t_{n-1})}{h} - f(t_{n-1}, y(t_{n-1}))    (4.48)

        = \frac{y(t_{n-1}) + h y'(t_{n-1}) + \frac{1}{2} h^2 y''(t_{n-1}) + \cdots - y(t_{n-1})}{h} - f(t_{n-1}, y(t_{n-1}))    (4.49)

Using the fact that y'(t_{n-1}) = f(t_{n-1}, y(t_{n-1})),

    d_n = \frac{h}{2} y''(t_{n-1}) + O(h^2)    (4.50)

Thus the local truncation error for Euler's method is proportional to h. We write this as d_n = O(h), and say that Euler's method is a first-order method.

Definition 4.6 (Convergence). A one-step method is said to converge on an interval [a, b] if y_n \to y on [a, b] as n \to \infty for any IVP

    y' = f(t, y), \quad y(t_0) = y_0    (4.51)

with f(t, y) Lipschitz in y. A method is said to be convergent of order p if for some positive integer p the global error e_n satisfies

    e_n = |y(t_n) - y_n| = O(h^p)    (4.52)

Definition 4.7 (Stability, Zero-Stability, 0-Stability). A one-step method is said to be stable if for each IVP (4.44) with f Lipschitz in y there exist K, h_0 > 0 such that the difference between two mesh functions (not necessarily solutions, just functions defined on the mesh) y_n and \hat{y}_n satisfies

    |y_n - \hat{y}_n| \le K \left[ |y_0 - \hat{y}_0| + \| \mathcal{N} y_n - \mathcal{N} \hat{y}_n \| \right]    (4.53)

for all h \in (0, h_0], where \| \cdot \| denotes the sup-norm,

    \| u_n \| = \max_{1 \le j \le n} |u_j|    (4.54)

Math 582B, Spring 2007, California State University Northridge. ©2007, B.E. Shapiro. Last revised: May 23, 2007.
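The first-order behavior d_n = O(h) can be checked numerically. The following Python sketch (not part of the original notes; the model problem y' = -y is chosen for illustration) solves an IVP with Euler's method at successively halved step sizes and confirms that the global error at t = 1 roughly halves each time, as expected for a first-order method.

```python
import math

def euler(f, t0, y0, t_end, n):
    """Fixed-step Euler's method: y_{k+1} = y_k + h * f(t_k, y_k)."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# Model problem y' = -y, y(0) = 1, with exact solution y(t) = exp(-t).
f = lambda t, y: -y
errors = []
for n in (10, 20, 40, 80):
    y_num = euler(f, 0.0, 1.0, 1.0, n)
    errors.append(abs(y_num - math.exp(-1.0)))

# For a first-order method, halving h should roughly halve the global error.
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print(ratios)  # each ratio is close to 2
```

A second-order method run through the same experiment would produce error ratios near 4 instead of 2.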
CHAPTER 4. IMPROVING ON EULER'S METHOD 75

Theorem 4.2. Euler's method is zero-stable.

Proof.

    \| \mathcal{N} u_n - \mathcal{N} u^*_n \| = \left\| \frac{u_n - u^*_n}{h} - \frac{u_{n-1} - u^*_{n-1}}{h} - \left( f(u_{n-1}) - f(u^*_{n-1}) \right) \right\|    (4.55)

        \ge \left\| \frac{u_n - u^*_n}{h} \right\| - \left\| \frac{u_{n-1} - u^*_{n-1}}{h} + \left( f(u_{n-1}) - f(u^*_{n-1}) \right) \right\|    (4.56)

By the triangle inequality and the Lipschitz condition,

    \left\| \frac{u_n - u^*_n}{h} \right\| \le \| \mathcal{N} u_n - \mathcal{N} u^*_n \| + \left\| \frac{u_{n-1} - u^*_{n-1}}{h} \right\| + K \| u_{n-1} - u^*_{n-1} \|    (4.57)

so that

    \| u_n - u^*_n \| \le h \| \mathcal{N} u_n - \mathcal{N} u^*_n \| + \| u_{n-1} - u^*_{n-1} \| + hK \| u_{n-1} - u^*_{n-1} \|    (4.58)

        = h \| \mathcal{N} u_n - \mathcal{N} u^*_n \| + (1 + hK) \| u_{n-1} - u^*_{n-1} \|    (4.59)

Applying this result recursively n - 1 additional times gives

    \| u_n - u^*_n \| \le h \| \mathcal{N} u_n - \mathcal{N} u^*_n \| + (1 + hK) \| u_{n-1} - u^*_{n-1} \|    (4.60)

        \le h \| \mathcal{N} u_n - \mathcal{N} u^*_n \| + (1 + hK) \left[ h \| \mathcal{N} u_n - \mathcal{N} u^*_n \| + (1 + hK) \| u_{n-2} - u^*_{n-2} \| \right]    (4.61-4.62)

        \le \cdots \le h \| \mathcal{N} u_n - \mathcal{N} u^*_n \| \sum_{i=1}^{n} (1 + hK)^{n-i} + (1 + hK)^n \| u_0 - u^*_0 \|    (4.63)

        \le k \left\{ \| u_0 - u^*_0 \| + \| \mathcal{N} u_n - \mathcal{N} u^*_n \| \right\}    (4.64)

where

    k = \max \left\{ h \sum_{i=1}^{n} (1 + hK)^{n-i},\ (1 + hK)^n \right\}    (4.65)

Therefore the method is 0-stable. ∎

A zero-stable method depends continuously on the initial data. If a method is not zero-stable, then a small perturbation in the method could potentially lead to an arbitrarily large change in the results. Suppose that

    y_n = y_{n-1} + h \phi(t_{n-1}, y_{n-1}, \ldots)    (4.66)

is a numerical method. Then we define a perturbation \delta of the method as

    \tilde{y}_n = \tilde{y}_{n-1} + h \phi(t_{n-1}, \tilde{y}_{n-1}, \ldots) + \delta_n    (4.67)

The following theorem is normally taken as the definition of zero-stability.
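The continuous dependence on initial data can also be observed directly. The sketch below (not from the notes; problem and tolerances are illustrative) runs Euler's method twice, once from y_0 and once from a perturbed initial value ŷ_0 = y_0 + ε, and checks that the two mesh functions never drift apart by more than a fixed constant times ε. For y' = -y on [0, 1] the Lipschitz constant is K = 1, so (1 + hK)^n ≤ e^{K(b-a)} = e bounds the amplification factor.

```python
import math

def euler_path(f, t0, y0, t_end, n):
    """Euler's method, returning the whole mesh function y_0, ..., y_n."""
    h = (t_end - t0) / n
    t, y = t0, y0
    ys = [y0]
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
        ys.append(y)
    return ys

# y' = -y on [0, 1]; Lipschitz constant K = 1.
f = lambda t, y: -y
n = 1000
eps = 1e-6  # perturbation of the initial condition

y  = euler_path(f, 0.0, 1.0, 1.0, n)
yp = euler_path(f, 0.0, 1.0 + eps, 1.0, n)

# Zero-stability: |y_n - yhat_n| stays bounded by a constant times |y_0 - yhat_0|,
# uniformly in n; here the constant e = exp(K * (b - a)) suffices.
max_diff = max(abs(a - b) for a, b in zip(y, yp))
print(max_diff <= math.e * eps)  # True
```

An unstable scheme would instead amplify ε by a factor that grows without bound as h → 0 with nh fixed.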