Partial Differential Equations - Modelling and ... - ResearchGate
272 Y. Achdou

Lemma 1. Under the assumptions of Proposition 2, and if

(i) either α < 1/2,
(ii) or ψ is continuous near 0 and there exist a bounded function ω : ℝ → ℝ and two positive numbers ζ and C such that
ψ(z)e^{3z/2} − ψ(0)e^{−3z/2} = zω(z), with |ω(z)| ≤ C|z|e^{−ζ|z|}, for all z ∈ ℝ,

then for any s ∈ ℝ, the operator B − B^T is continuous from V^s to V^{s−1}.

4.3 The Least Square Problem and Its Penalized Version

In order to properly define the least square problem, we have to define the set where (σ, α, ψ) may vary and the regularization functional. Let us introduce a Hilbert space H_ψ endowed with the norm ‖·‖_{H_ψ}, relatively compact in B. Let J_ψ be a convex, coercive and C^1 function defined on H_ψ. It is well known that J_ψ is then also weakly lower semicontinuous in H_ψ. Consider a closed and convex subset 𝓗_ψ of H_ψ. We assume that 𝓗_ψ is contained in {ψ : ‖ψ‖_B ≤ ψ̄ ; ψ ≥ 0} and that

1. the functions ψ ∈ 𝓗_ψ are continuous near 0,
2. there exist two positive constants ψ̲ and z̄ such that ψ(z) ≥ ψ̲ for all z such that |z| ≤ z̄,
3. there exist two constants ζ > 0 and C ≥ 0 such that, for all ψ ∈ 𝓗_ψ,
ψ(z)e^{3z/2} − ψ(0)e^{−3z/2} = zω(z), with |ω(z)| ≤ C|z|e^{−ζ|z|}, for all z ∈ ℝ.

This assumption will allow us to use the results stated in Lemma 1.

Finally, consider the set 𝓗 = [σ̲, σ̄] × [0, 1 − α̲] × 𝓗_ψ and define

J_R(σ, α, ψ) = |σ − σ°|² + |α − α°|² + J_ψ(ψ),

where σ° and α° are suitable prior parameters. Consider the least square problem:

Minimize J(u) + J_R(σ, α, ψ) subject to (σ, α, ψ) ∈ 𝓗, where u = u(σ, α, ψ) satisfies (VIP).   (41)

We fix X̄ (independent of (σ, α, ψ) ∈ 𝓗) as in Proposition 6, and assume that x_i < X̄, i ∈ I. Taking X ≥ X̄, it is also possible to consider the least square inverse problem corresponding to the penalized problem:

Minimize J(u_ε) + J_R(σ, α, ψ) subject to (σ, α, ψ) ∈ 𝓗, where u_ε satisfies (37).   (42)

Propositions 6 and 7 are useful for proving the following:

Proposition 8 (Approximation of the least square problem).
Let (ε_n)_n be a sequence of penalty parameters such that ε_n → 0 as n → ∞, and let ((σ*_{ε_n}, α*_{ε_n}, ψ*_{ε_n}), u*_{ε_n}) be a solution of the problem (42), with X fixed as above. Consider a subsequence such that (σ*_{ε_n}, α*_{ε_n}, ψ*_{ε_n}) converges to (σ*, α*, ψ*) in F, ψ*_{ε_n} weakly converges to ψ* in H_ψ, and u*_{ε_n} → u* weakly in L²(0,T; V_X),
Calibration of Lévy Processes with American Options 273

where V_X is defined in (35). Then ((σ*, α*, ψ*), u*) is a solution of (41), where we agree to use the notation u* for the function E_X(u*). We have that

(i) u*_{ε_n} converges to u* uniformly in [0,T] × [0,X], and in L²(0,T; V_X);
(ii) 1_{x>S} r x V_{ε_n}(u*_{ε_n}) converges to μ* strongly in L²((0,T) × (0,X));
(iii) for every smooth function χ with compact support contained in [0,X), χu*_{ε_n} converges to χu* strongly in L²(0,T; V²) and in L^∞(0,T; V).

4.4 The Optimality Conditions

We fix X as above. Let a subsequence (σ*_{ε_n}, α*_{ε_n}, ψ*_{ε_n}, u*_{ε_n}) of solutions of (42) converge to (σ*, α*, ψ*, u*) as in Proposition 8; then (σ*, α*, ψ*, u*) is a solution of (41).

The optimality conditions will involve an adjoint problem. Since the cost functional involves pointwise values of u, the adjoint problem will have singular data. In that context, the notion of very weak solution of boundary value problems will be relevant: for that, we introduce the spaces Z̃ and Z,

Z̃ = { v ∈ L²(0,T; V_X) : ∂v/∂t + A_X v ∈ L²((0,T) × (0,X)) },   Z = { v ∈ Z̃ : v(t = 0) = 0 },   (43)

where A_X is the operator given by (36), (29) and (13), with the parameters (σ*, α*, ψ*). These spaces, endowed with the graph norm, are Banach spaces.

We also need to introduce some functionals before stating the optimality conditions. We assume that u*(T_i, x_i) > u°(x_i) for all i ∈ I. It is clear from the continuity of u* and from the uniform convergence of u*_{ε_n} that there exist a positive real number a and an integer N such that for n > N, u*_{ε_n}(t, x) > u°(x) + ε_n for all (t, x) such that |t − T_i|
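The penalized least-square problem (42) lends itself to standard numerical optimization once a solver for the penalized problem (37) is available. The following is only a minimal sketch of the functional J(u_ε) + J_R(σ, α, ψ): the function `solve_penalized_pde` is a toy stand-in (not a solver of (37)), and the finite-dimensional coefficient parametrization of ψ and the specific quadratic choice of J_ψ are illustrative assumptions, not the chapter's.

```python
import numpy as np

def solve_penalized_pde(sigma, alpha, psi_coeffs, maturities, strikes, eps):
    """Toy stand-in for a solver of the penalized problem (37).
    A real implementation would time-step the penalized partial
    integro-differential equation; here we only return a smooth
    function of the parameters so the sketch runs end to end."""
    return sigma * np.sqrt(maturities) * strikes * (1.0 + alpha + eps)

def j_psi(psi_coeffs):
    """An illustrative convex, coercive, C^1 regularization J_psi:
    a quadratic (H^1-type) penalty on coefficients parametrizing psi."""
    c = np.asarray(psi_coeffs, dtype=float)
    return float(np.sum(c ** 2) + np.sum(np.diff(c) ** 2))

def objective(theta, data, sigma_prior, alpha_prior, eps=1e-3):
    """Penalized least-square functional of problem (42):
    squared misfit J(u_eps) at the observation points (T_i, x_i),
    plus J_R = |sigma - sigma°|^2 + |alpha - alpha°|^2 + J_psi(psi)."""
    sigma, alpha, psi_coeffs = theta[0], theta[1], theta[2:]
    maturities, strikes, observed = data
    model = solve_penalized_pde(sigma, alpha, psi_coeffs,
                                maturities, strikes, eps)
    misfit = float(np.sum((model - observed) ** 2))            # J(u_eps)
    reg = ((sigma - sigma_prior) ** 2 + (alpha - alpha_prior) ** 2
           + j_psi(psi_coeffs))                                # J_R
    return misfit + reg

# Hypothetical data: one observed price at (T_1, x_1) = (1.0, 100.0).
data = (np.array([1.0]), np.array([100.0]), np.array([20.0]))
theta = np.array([0.2, 0.5, 0.1, 0.1])  # (sigma, alpha, psi coefficients)
val = objective(theta, data, sigma_prior=0.25, alpha_prior=0.5)
```

In practice the minimization would be carried out over the constraint set 𝓗, e.g. with a projected-gradient or bound-constrained quasi-Newton method respecting the box constraints on σ and α, and the penalty parameter ε would be driven to 0 as in Proposition 8.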