Etude des marchés d'assurance non-vie à l'aide d'équilibres de ...
tel-00703797, version 2 - 7 Jun 2012
In the jointly convex case, we get

V̂(x) = 0 and x ∈ X(x),   (3.11)

where the set X(x) = {y ∈ R^n : g(y_i, x_{-i}) ≤ 0}. Still, the computation of V̂ is a complex optimization over the constrained set X(x). As in the previous subsection, the class of GNEs called variational equilibria can be characterized by the NI formulation. We have the following theorem.
Theorem. Assume θ_i and g are C^1 functions, g is convex and θ_i is player-convex. Then x⋆ is a variational equilibrium if and only if x⋆ ∈ X and V(x⋆) = 0, with V defined as

V(x) = sup_{y ∈ X} ψ(x, y).
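For reference, ψ denotes the Nikaido–Isoda function used throughout the chapter; assuming the standard definition (with θ_i the cost function of player i), it reads in LaTeX form:

```latex
% Nikaido-Isoda function (standard definition; assumed consistent
% with the chapter's earlier notation for the costs \theta_i)
\psi(x, y) = \sum_{i=1}^{N} \bigl[ \theta_i(x_i, x_{-i}) - \theta_i(y_i, x_{-i}) \bigr]
```

With this convention, V(x) = sup_{y ∈ X} ψ(x, y) is nonnegative on X and vanishes exactly at a variational equilibrium.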
In the rest of the paper, we do not study all algorithms but rather focus on the most promising ones. We restrict our attention to general GNEPs and to algorithms solving the KKT system presented in Subsection 3.1.1. So, we do not study jointly convex GNEPs, for which special methods have been proposed in the literature. These two situations differ widely: in the general GNEP, we have to solve a nonlinear equation, while in the jointly convex case, we solve a fixed-point equation or a minimization problem.
3.2 Methods to solve nonlinear equations
As introduced in many optimization books, see, e.g., Dennis and Schnabel (1996); Nocedal and Wright (2006); Bonnans et al. (2006), an optimization method to solve a nonlinear equation, or more generally to find the minimum of a function, is made of two components: a local method and a globalization scheme. Assuming the initial point is not "far" from the root or the optimal point, local methods use a local approximation of the function, generally a linear or quadratic approximation based on the Taylor expansion, that is easier to solve. The globalization scheme studies the adjustments to be carried out so that the iterate sequence still converges when the algorithm is badly initialized.
To emphasize the prominent role of the globalization, we first look at a simple example of a nonlinear equation. Let F : R^2 → R^2 be defined as

F(x) = ( x_1^2 + x_2^2 − 2,  e^{x_1 − 1} + x_2^3 − 2 ).
This function has only two roots, x⋆ = (1, 1) and x̄ = (−0.7137474, 1.2208868). We notice that the second component of F explodes as x_1 tends to infinity.
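As a quick numerical check (a minimal Python sketch, not part of the original text), we can evaluate F at the two stated roots:

```python
import numpy as np

def F(x):
    """The example map F: R^2 -> R^2 from the text."""
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     np.exp(x[0] - 1.0) + x[1]**3 - 2.0])

# The two roots given in the text.
x_star = np.array([1.0, 1.0])
x_bar = np.array([-0.7137474, 1.2208868])

print(np.linalg.norm(F(x_star)))  # exactly 0 in floating point
print(np.linalg.norm(F(x_bar)))   # ~1e-6, i.e. zero to the quoted precision
```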
In Figure 3.2, we plot the contour levels of the norm ||F(x)||_2, as well as two iterate sequences (x_n), (y_n) (see numbers 0, 1, 2, . . . ) starting from the same point x_0 = y_0 = (−1, −3/2). The first sequence (x_n) corresponds to a "pure" Newton method, which we present below, whereas the second sequence (y_n) combines the Newton method with a line search (LS). We can observe that the sequence (y_n) converges less abruptly to the solution x⋆ than the sequence (x_n).
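The two methods compared in the figures can be sketched as follows (an illustrative Python implementation; the thesis does not specify its line-search parameters, so the Armijo constant 1e-4 and step halving below are assumptions):

```python
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     np.exp(x[0] - 1.0) + x[1]**3 - 2.0])

def J(x):
    """Jacobian of F."""
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(x[0] - 1.0), 3.0 * x[1]**2]])

def newton(x, line_search=False, tol=1e-8, max_iter=100):
    """Newton iteration for F(x) = 0, optionally globalized by an
    Armijo backtracking line search on the merit function 0.5*||F||^2."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(J(x), -Fx)  # Newton direction
        t = 1.0
        if line_search:
            merit = 0.5 * Fx @ Fx
            slope = -Fx @ Fx  # directional derivative of the merit along d
            while 0.5 * F(x + t * d) @ F(x + t * d) > merit + 1e-4 * t * slope:
                t *= 0.5  # backtrack until sufficient decrease
                if t < 1e-12:
                    break
        x = x + t * d
    return x

x0 = np.array([-1.0, -1.5])
x_pure = newton(x0)                  # "pure" Newton
x_ls = newton(x0, line_search=True)  # Newton with line search
print(x_pure, x_ls)  # both runs end at a root of F from this start
```

The line-search variant only shortens the Newton step when the full step fails to decrease the merit function, which is what produces the less abrupt trajectory seen in Figure 3.2.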
In Figure 3.3, we plot the contour levels of the norm ||F(x)||_2 with two iterate sequences (x_n), (y_n), for pure and line-search Newton, respectively. But this time, the sequences are initiated at x_0 = y_0 = (2, 1/2). Despite being close to the solution x̄, the pure sequence (x_n) wanders in