

1.8. Appendix

For example, the normal distribution N(µ, σ²) with θ = µ and φ = σ²; see Clark and Thayer (2004) for details.
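As a short worked expansion of this example (added here for illustration; it follows directly from the exponential-family form used in Equation (1.6) below), the normal density can be rewritten as
\[
f(y;\mu,\sigma^2)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(y-\mu)^2}{2\sigma^2}\right)
=\exp\!\left(\frac{y\theta-\theta^2/2}{\phi}-\frac{y^2}{2\phi}-\frac{1}{2}\ln(2\pi\phi)\right),
\]
so that b(θ) = θ²/2, a(φ) = φ and c(y, φ) = −y²/(2φ) − ½ ln(2πφ).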

Fitting procedure

To determine the parameter vector β, we use maximum likelihood estimation. For n observations, the log-likelihood of the model given a distribution from the exponential family is written as follows:

\[
\ln\!\big(L(\theta_1,\dots,\theta_n,\phi,y_1,\dots,y_n)\big)
=\sum_{i=1}^{n}\left(\frac{y_i\theta_i-b(\theta_i)}{a(\phi)}+c(y_i,\phi)\right). \qquad (1.6)
\]

Let us define µ_i = E(Y_i) and η_i = g(µ_i) = X_iβ, the linear predictor, where i denotes the observation index and n the total number of observations.

For all i and j,
\[
\frac{\partial \ln(L_i)}{\partial \beta_j}
=\frac{\partial \ln(L_i)}{\partial \mu_i}\times\frac{\partial \mu_i}{\partial \beta_j}
=(g^{-1})'\big(g(\mu_i)\big)\times\frac{y_i-\mu_i}{\mathrm{Var}(Y_i)}\,X_{ij},
\]
since, for the exponential family, ∂ln(L_i)/∂µ_i = (y_i − µ_i)/Var(Y_i) and ∂µ_i/∂β_j = (g⁻¹)'(η_i) X_ij. The maximum likelihood equations are then
\[
\sum_i \frac{\partial \ln(L_i)}{\partial \beta_j}
=\sum_i (g^{-1})'\big(g(\mu_i)\big)\times\frac{y_i-\mu_i}{\mathrm{Var}(Y_i)}\,X_{ij}=0,
\]
for all j. Therefore, we get the equations, as functions of the β_j's (the constant factor a(φ), which does not affect the roots, is dropped):
\[
\sum_i (g^{-1})'(X_i\beta)\times\frac{y_i-g^{-1}(X_i\beta)}{b''\!\big((b')^{-1}(g^{-1}(X_i\beta))\big)}\,X_{ij}=0. \qquad (1.7)
\]

These equations are not linear with respect to the β_j's and cannot be solved analytically. As is usual for such nonlinear systems, we use an iterative algorithm to find the solution. Most statistical software, such as R, uses an iterative weighted least-squares method; see Section 2.5 of McCullagh and Nelder (1989).
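To make the iteration concrete, here is a minimal sketch of an iteratively reweighted least-squares fit for the binary-response case with logit link (equivalently, Fisher scoring for the canonical link). It is an illustration in Python/NumPy, not the thesis's own code; the function name irls_logit, the starting value, and the convergence settings are ours.

```python
import numpy as np

def irls_logit(X, y, max_iter=25, tol=1e-8):
    """Fit a binary-response GLM with logit link by iteratively
    reweighted least squares.

    X : (n, p) design matrix, y : (n,) vector of 0/1 responses.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(max_iter):
        eta = X @ beta                              # linear predictor eta_i = X_i beta
        mu = 1.0 / (1.0 + np.exp(-eta))             # mu_i = g^{-1}(eta_i)
        w = np.clip(mu * (1.0 - mu), 1e-10, None)   # working weights = variance function
        z = eta + (y - mu) / w                      # working (adjusted) response
        XtW = X.T * w                               # X^T W, with W = diag(w)
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

Each step solves the weighted least-squares system X^T W X β = X^T W z; at convergence, the fitted β̂ satisfies the likelihood equations (1.7).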

Link functions for binary regression

Log-likelihood for canonical link Using the expression of the variance function and the canonical logit link ($g^{-1}(x)=\frac{1}{1+e^{-x}}$, and $b''\big((b')^{-1}(x)\big)=x(1-x)$ since $b(\theta)=\ln(1+e^{\theta})$ for the Bernoulli distribution), Equation (1.7) becomes
\[
0=\sum_i \frac{\dfrac{e^{-\eta_i}}{(1+e^{-\eta_i})^2}}{\dfrac{1}{1+e^{-\eta_i}}\cdot\dfrac{e^{-\eta_i}}{1+e^{-\eta_i}}}
\left(y_i-\frac{1}{1+e^{-\eta_i}}\right)X_{ij}
=\sum_i \left(y_i-\frac{1}{1+e^{-\eta_i}}\right)X_{ij},
\]
for j = 1, …, p. These equations are called the likelihood equations. In matrix form, we get the so-called score equation
\[
X^T\big(Y-\mu(\beta)\big)=0.
\]

Thus, the Fisher information matrix for β in the case of the logit link is
\[
I(\pi)\,\triangleq\,-E\!\left(\frac{\partial^2 \ln L}{\partial \beta_j \partial \beta_k}\right)_{\!j,k}
=X^T\,\mathrm{diag}\big(\pi_i(1-\pi_i)\big)\,X.
\]
Since we use the maximum likelihood estimator, the estimator $\hat\beta$ has the desirable property of being asymptotically unbiased and Gaussian, with variance matrix approximated by the inverse Fisher information $I(\pi(\hat\beta))^{-1}$ ∗.

∗. See Subsection 4.4.4 of McCullagh and Nelder (1989).
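As an illustrative numerical check (simulated data, reusing the irls_logit sketch above, which is ours and not part of the thesis), one can verify that the fitted coefficients satisfy the score equation X^T(Y − µ(β̂)) = 0 and obtain the asymptotic standard errors from the inverse Fisher information (X^T diag(π̂_i(1 − π̂_i)) X)⁻¹:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + one covariate
beta_true = np.array([-0.5, 1.0])
pi = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, pi)                                   # simulated 0/1 responses

beta_hat = irls_logit(X, y)
pi_hat = 1.0 / (1.0 + np.exp(-X @ beta_hat))

score = X.T @ (y - pi_hat)                                # ~ 0 at the MLE
fisher = X.T @ (X * (pi_hat * (1 - pi_hat))[:, None])     # X^T diag(pi(1-pi)) X
std_err = np.sqrt(np.diag(np.linalg.inv(fisher)))         # asymptotic standard errors

print(beta_hat, score, std_err)
```

Up to numerical tolerance, these standard errors should agree with those reported by R's glm with the binomial family, which relies on the same iterative weighted least-squares fit.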

