Partial Differential Equations - Modelling and ... - ResearchGate
86 J.Ch. Gilbert and P. Joly

\[
F(\tau) = \begin{pmatrix} [\psi_m(R(\tau))]'(\tau_1) \\ \vdots \\ [\psi_m(R(\tau))]'(\tau_k) \end{pmatrix}.
\]

Obviously, there holds $F(\tau) = 0$ if $\tau$ is the vector of the alternate tangent points of the optimal polynomial. We propose to determine the root(s) $\tau$ of $F$ by Newton's method (see [Deu04, BGLS06], for instance). The procedure could have been improved by using a version of Newton's method that exploits inequalities (see, for example, [Kan01, BM05] and the references therein) to impose $\tau_1 > \tau_2 > \cdots > \tau_k$, as well as the curvature of the solution polynomial at the tangent points: $[\psi_m(R(\tau))]''(\tau_j)\,(2 - v_j) \ge 0$, for $1 \le j \le k$. We have not adopted this additional sophistication, however.

The Newton method requires the computation of $F'(\tau)$. If we denote by $r_l(\tau)$, $1 \le l \le k$, the coefficients of $R(\tau)$, by $\delta_{ij}$ the Kronecker symbol, and by $V_k(\tau)$ the Vandermonde matrix of order $k$, there holds

\[
\frac{\partial F_i}{\partial \tau_j}(\tau)
= \delta_{ij}\,[\psi_m(R(\tau))]''(\tau_i) + \sum_{l=1}^{k} \frac{\partial r_l}{\partial \tau_j}(\tau)\,(m+l)\,\tau_i^{\,m+l-1}
= \delta_{ij}\,[\psi_m(R(\tau))]''(\tau_i) + \bigl[\operatorname{Diag}(\tau_1^m, \dots, \tau_k^m)\, V_k(\tau)\, \operatorname{Diag}(m+1, \dots, m+k)\, r'(\tau)\bigr]_{ij}.
\]

To get an expression of $r'(\tau)$, let us differentiate with respect to $\tau_j$ the identity $[\psi_m(R(\tau))](\tau_i) = v_i$. It results in

\[
\delta_{ij}\,[\psi_m(R(\tau))]'(\tau_i) + \bigl(\tau_i^{\,m+1} \;\cdots\; \tau_i^{\,m+k}\bigr)\, \frac{\partial r}{\partial \tau_j}(\tau) = 0.
\]

Denoting by $M(\tau)$ the coefficient matrix of the linear system (30), we get

\[
r'(\tau) = -M(\tau)^{-1}\, \operatorname{Diag}\bigl([\psi_m(R(\tau))]'(\tau_1), \dots, [\psi_m(R(\tau))]'(\tau_k)\bigr) = -M(\tau)^{-1}\, \operatorname{Diag}(F(\tau)).
\]

Therefore,

\[
F'(\tau) = \operatorname{Diag}\bigl([\psi_m(R(\tau))]''(\tau_1), \dots, [\psi_m(R(\tau))]''(\tau_k)\bigr) - \operatorname{Diag}(\tau_1^m, \dots, \tau_k^m)\, V_k(\tau)\, \operatorname{Diag}(m+1, \dots, m+k)\, M(\tau)^{-1}\, \operatorname{Diag}(F(\tau)).
\]

Observe that at a solution $\tau^*$ the second term above vanishes, so that $F'(\tau^*)$ is diagonal. It is also nonsingular if the second derivatives $[\psi_m(R(\tau^*))]''(\tau^*_j)$ are nonzero. Around such a solution, Newton's method is, therefore, well defined.
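The structure of the Newton iteration used here, solve $F'(\tau)\,d = -F(\tau)$ and update $\tau \leftarrow \tau + d$, can be sketched as follows. This is only a minimal illustration: the functions `newton`, `F`, and `J` below are assumptions for the sketch, with a toy two-dimensional system standing in for the paper's $F$ and its diagonal-plus-correction Jacobian.

```python
# Minimal sketch of a Newton iteration for a vector equation F(tau) = 0
# with an analytic Jacobian, as used in the text. The 2x2 system solved
# here is a toy stand-in (an assumption), not the paper's actual F.

def newton(F, J, tau, tol=1e-12, maxit=50):
    """Newton's method: repeatedly solve J(tau) step = -F(tau)."""
    for _ in range(maxit):
        f = F(tau)
        if max(abs(v) for v in f) < tol:
            break
        a, b, c, d = J(tau)  # 2x2 Jacobian, row-major: [[a, b], [c, d]]
        det = a * d - b * c
        # Solve the 2x2 linear system by Cramer's rule.
        s0 = (-f[0] * d + f[1] * b) / det
        s1 = (-f[1] * a + f[0] * c) / det
        tau = (tau[0] + s0, tau[1] + s1)
    return tau

# Toy system F(x, y) = (x^2 + y^2 - 4, x - y), root at (sqrt(2), sqrt(2)).
F = lambda t: (t[0] ** 2 + t[1] ** 2 - 4.0, t[0] - t[1])
J = lambda t: (2.0 * t[0], 2.0 * t[1], 1.0, -1.0)
root = newton(F, J, (1.0, 2.0))
```

As in the text, convergence is local: the iteration is well defined only near a root where the Jacobian is nonsingular, and the choice of starting point matters.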
In the numerical results presented below, we have used fsolve, the nonlinear-equation solver of Matlab (version 7.2), which does not take the inequality constraints into account. The vector $v$ has been determined by adopting the following heuristics. We have assumed that the optimal polynomial is negative for all $x < 0$, so that the coefficient of $x^{m+k}$ of the optimal polynomial has the sign $(-1)^{m+k+1}$; if the assumption is correct, the optimal polynomial should get out of the interval at $y = 0$ if $m + k$ is even and at $y = 4$ if $m + k$ is odd; according to Theorem 4, one should, therefore, take $v_1 = 4 - \varepsilon_v$ if $m + k$ is even and $v_1 = \varepsilon_v$ if $m + k$ is odd. The value of $\varepsilon_v$ is taken nonnegative and as close as possible to 0; a positive value of $\varepsilon_v$ is usually necessary to counterbalance rounding errors. The other values $v_i$ alternate in $\{\varepsilon_v, 4 - \varepsilon_v\}$. The initial point $\tau$ is chosen by trial and error, or according to the suggestions made in the discussion below.

The proposed approach has the following advantages (+) and disadvantages (−):

+ The problem has few variables (just $k$).
+ The problem looks well conditioned, provided the second derivatives at the tangent points are reasonable, which seems to be the case.
− There is no guarantee that the solution found is the optimal one, since a zero of $F$ will not be a solution to the original problem if the polynomial gets out of $[0, 4]$ at a point $\tau_0$ less than $\tau_1$. An example of this situation is given in Figure 3. However, if $\tau_0 > \tau_1$ and if $[\psi_m(R)](\tau_0) + [\psi_m(R)](\tau_1) = 4$, the sufficient optimality conditions of Theorem 5 guarantee that $R$ is the solution.
− The solution polynomial may get out of the interval $[0, 4]$ near a tangent point, due to the lack of precision of the solution; this has motivated the use of the small $\varepsilon_v > 0$.
− Convergence to a zero of $F$ (and not merely to a stationary point $\tau^*$ of $\|F\|_2^2$, which only verifies $F'(\tau^*)^T F(\tau^*) = 0$) depends on the initialization of the iterative process.

Fig. 3. A zero of F that is not an optimal polynomial (m = 3, k = 1).
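The heuristic above for choosing $v$ can be sketched in a few lines. The helper `build_v` and its signature are invented for illustration (not the authors' code); `eps_v` plays the role of $\varepsilon_v$.

```python
# Sketch of the heuristic for the vector v = (v_1, ..., v_k) described
# in the text. The function name and signature are assumptions.

def build_v(m, k, eps_v=1e-8):
    """Return v_1, ..., v_k alternating in {eps_v, 4 - eps_v}.

    v_1 = 4 - eps_v if m + k is even, and v_1 = eps_v if m + k is odd;
    the remaining entries alternate between the two values.
    """
    pair = (4.0 - eps_v, eps_v) if (m + k) % 2 == 0 else (eps_v, 4.0 - eps_v)
    return [pair[i % 2] for i in range(k)]
```

For instance, with $m = 3$, $k = 1$ (the case of Figure 3), $m + k$ is even and the single entry is $4 - \varepsilon_v$; a small positive $\varepsilon_v$ guards against rounding errors as noted above.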